I already checked Numpy mean of nonzero values and it worked nicely. However, some rows of my matrix contain only zero elements. What is a good way to avoid the RuntimeWarning: invalid value encountered in true_divide in this case? Also, I don't want the all-zero rows to be replaced by NaN here.
eachPSM = np.ones([3,4])
eachPSM[0] = 0
print eachPSM
>> [[ 0. 0. 0. 0.]
[ 1. 1. 1. 1.]
[ 1. 1. 1. 1.]]
print np.true_divide(eachPSM.sum(1),(eachPSM!=0).sum(1))
>> RuntimeWarning: invalid value encountered in true_divide
[ nan 1. 1.]
With a as the input array, you could use masking -
invalid_val = np.nan  # value assigned to the all-zero rows
out = np.full(a.shape[0], invalid_val)
count = (a != 0).sum(1)           # number of nonzero elements in each row
valid_mask = count != 0           # rows with at least one nonzero element
out[valid_mask] = a[valid_mask].sum(1)/count[valid_mask]
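Run against the eachPSM sample from the question, this never touches the all-zero row, so no warning is raised (a quick check, assuming a = eachPSM and the snippet above has been executed):
print out
#=> [ nan   1.   1.]
Choosing invalid_val = 0. instead keeps the all-zero row as 0 rather than NaN.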
import warnings
...
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    eachPSM[np.isnan(eachPSM)] = 0
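Filled in around the division from the question, the idea looks like this (a sketch; result is an illustrative name, and the NaN replacement is applied to the division result rather than to eachPSM itself):
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    result = np.true_divide(eachPSM.sum(1), (eachPSM != 0).sum(1))
result[np.isnan(result)] = 0  # all-zero rows become 0 instead of NaN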
Since anything divided by 1 is the same as the numerator, you can replace the zero counts with 1, i.e.
x = eachPSM.sum(1)         # row sums
y = (eachPSM != 0).sum(1)  # nonzero counts per row
y[y == 0] = 1              # dividing by 1 leaves the zero row sums at 0
np.true_divide(x, y)
#array([ 0.,  1.,  1.])
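A related idiom, not used in the answers above: np.true_divide is a ufunc, so its out and where arguments can skip the zero-count rows entirely, and no warning is raised in the first place:
x = eachPSM.sum(1)
y = (eachPSM != 0).sum(1)
np.true_divide(x, y, out=np.zeros_like(x), where=y != 0)
#array([ 0.,  1.,  1.])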
Masked arrays provide an elegant solution:
eachPSM = np.ones([3,4])
eachPSM[0] = 0
eachPSM[1,1] = 0
#[[ 0. 0. 0. 0.]
# [ 1. 0. 1. 1.]
# [ 1. 1. 1. 1.]]
In [39]: np.ma.masked_equal(eachPSM,0).mean(1)
Out[39]:
masked_array(data = [-- 1.0 1.0],
mask = [ True False False],
fill_value = 1e+20)
In [40]: np.ma.masked_equal(eachPSM,0).mean(1).data
Out[40]: array([ 0., 1., 1.])
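As an aside (not part of the original answer): if you want a plain array with a chosen value in the all-zero rows, .filled() is more explicit than .data, because you name the fill value yourself:
np.ma.masked_equal(eachPSM, 0).mean(1).filled(0)
# array([ 0.,  1.,  1.])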
The following code uses the generic_filter function from scipy.ndimage to calculate the mean over a 3x3 sub-matrix for each element of the matrix, treating each element as the central one (and excluding it from the mean) while avoiding border effects.
import numpy as np
from scipy import ndimage
a = np.reshape(np.arange(25),(5,5))
print a
matrix = np.array(a).astype(np.float)
mask = np.ones((3, 3))
mask[1, 1] = 0  # exclude the central element from the footprint
result = ndimage.generic_filter(matrix, np.nanmean, footprint=mask,
                                mode='constant', cval=np.NaN)
print result
Result is printed as follows:
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]]
[[ 4. 4. 5. 6. 6.66666667]
[ 5.6 6. 7. 8. 8.4 ]
[ 10.6 11. 12. 13. 13.4 ]
[ 15.6 16. 17. 18. 18.4 ]
[ 17.33333333 18. 19. 20. 20. ]]
and it can be corroborated that it works as expected [the first mean is (5+6+1)/3 = 4, the second mean is (0+6+5+7+2)/5 = 4, and so on].
My question is: how can I access the individual values (the sub-matrix) that ndimage.generic_filter uses to calculate each np.nanmean?
ndimage.generic_filter calls a function once for each sub-array; you specify that function as the second argument. So if you change np.nanmean to a custom function func (see below), then you have access to the sub-arrays from within func.
To accumulate a list of sub-arrays, which could be accessed after the call to ndimage.generic_filter, you could pass a list to func as an extra argument, and append the subarrays to the list inside func. You can then access that list (and its contents) afterwards:
import numpy as np
from scipy import ndimage

def func(x, subarrays):
    print(x)
    subarrays.append(x)
    return np.nanmean(x)

a = np.reshape(np.arange(25), (5, 5))
matrix = np.array(a).astype(np.float)
mask = np.ones((3, 3))
mask[1, 1] = 0
subarrays = []
result = ndimage.generic_filter(matrix, func, footprint=mask,
                                mode='constant', cval=np.NaN,
                                extra_arguments=(subarrays,))
print(result)
print(len(subarrays))
prints each subarray:
[ nan nan nan nan 1. nan 5. 6.]
[ nan nan nan 0. 2. 5. 6. 7.]
[ nan nan nan 1. 3. 6. 7. 8.]
[ nan nan nan 2. 4. 7. 8. 9.]
[ nan nan nan 3. nan 8. 9. nan]
[ nan 0. 1. nan 6. nan 10. 11.]
[ 0. 1. 2. 5. 7. 10. 11. 12.]
[ 1. 2. 3. 6. 8. 11. 12. 13.]
[ 2. 3. 4. 7. 9. 12. 13. 14.]
[ 3. 4. nan 8. nan 13. 14. nan]
[ nan 5. 6. nan 11. nan 15. 16.]
[ 5. 6. 7. 10. 12. 15. 16. 17.]
[ 6. 7. 8. 11. 13. 16. 17. 18.]
[ 7. 8. 9. 12. 14. 17. 18. 19.]
[ 8. 9. nan 13. nan 18. 19. nan]
[ nan 10. 11. nan 16. nan 20. 21.]
[ 10. 11. 12. 15. 17. 20. 21. 22.]
[ 11. 12. 13. 16. 18. 21. 22. 23.]
[ 12. 13. 14. 17. 19. 22. 23. 24.]
[ 13. 14. nan 18. nan 23. 24. nan]
[ nan 15. 16. nan 21. nan nan nan]
[ 15. 16. 17. 20. 22. nan nan nan]
[ 16. 17. 18. 21. 23. nan nan nan]
[ 17. 18. 19. 22. 24. nan nan nan]
[ 18. 19. nan 23. nan nan nan nan]
and prints the final result:
[[ 4. 4. 5. 6. 6.66666667]
[ 5.6 6. 7. 8. 8.4 ]
[ 10.6 11. 12. 13. 13.4 ]
[ 15.6 16. 17. 18. 18.4 ]
[ 17.33333333 18. 19. 20. 20. ]]
and the length of subarrays:
25
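As an aside (not part of the original answer), generic_filter also accepts an extra_keywords dict, so the same list could be passed by keyword instead of positionally:
result = ndimage.generic_filter(matrix, func, footprint=mask,
                                mode='constant', cval=np.NaN,
                                extra_keywords={'subarrays': subarrays})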
I am trying to get the output from FCN32. I trained FCN32 starting from the pascalcontext-fcn32-heavy.caffemodel pre-trained model, and I could run it on grayscale images with 5 classes. However, during inference the output is all zeros (a black image). This is the inference code:
import numpy as np
from PIL import Image
import sys
import scipy.io as sio
from caffe.proto import caffe_pb2
import caffe
caffe.set_device(0)
caffe.set_mode_gpu()
# load image, subtract mean, and make dims C x H x W for Caffe
img_name='/home/ss/caffe-pascalcontext-fcn32s/dataset/Test/PNG/image-061-023.png' #+
im = Image.open(img_name)
in_ = np.array(im, dtype=np.float32)
in_ = np.expand_dims(in_, axis=0) #+
print in_.shape
#Read mean image
'''####################'''
mean_blob = caffe_pb2.BlobProto()
with open('/home/ss/caffe-pascalcontext-fcn32s/input/FCN32_mean.binaryproto') as f:
    mean_blob.ParseFromString(f.read())
mean_array = np.asarray(mean_blob.data, dtype=np.float32).reshape(
    (mean_blob.channels, mean_blob.height, mean_blob.width))
in_ -= mean_array
net_root = '/home/ss/caffe-pascalcontext-fcn32s'
MODEL_DEF = net_root + '/deploy.prototxt'
PRETRAINED = net_root + '/snapshot/FCN32s_train_iter_40000.caffemodel'
# load net
#net = caffe.Net('deploy.prototxt', 'snapshot/train_iter_640000.caffemodel', caffe.TEST)
net = caffe.Net(MODEL_DEF,PRETRAINED, caffe.TEST)
#net = caffe.Net('deploy.prototxt', 'snapshot_bak1/train_iter_400000.caffemodel', caffe.TEST)
# shape for input (data blob is N x C x H x W), set data
# put img to net
net.blobs['data'].reshape(1, *in_.shape) # batch size 1; *in_.shape unpacks the (C, H, W) dims
net.blobs['data'].data[...] = in_
# run net and take argmax for prediction
output = net.forward()
# print
def print_param(output):
    # the blobs
    print '--------------------------'
    print 'the blobs'
    for k, v in net.blobs.items():
        print k, v.data.shape
    # the parameters
    print '--------------------------'
    print 'the parameters'
    for k, v in net.params.items():
        print k, v[0].data.shape
    # the conv layer weights
    print '--------------------------'
    print 'the conv layer weights'
    print net.params['conv1_1'][0].data
    # the data blob
    print '--------------------------'
    print 'the data blob'
    print net.blobs['data'].data
    # the conv1_1 blob
    print '--------------------------'
    print 'the conv1_1 blob'
    print net.blobs['conv1_1'].data
    # the pool1 blob
    print '--------------------------'
    print 'the pool1 blob'
    print net.blobs['pool1'].data
    weights = net.blobs['fc6'].data[0]
    print 'blobs fc6'
    print np.unique(weights)
    weights = net.blobs['fc7'].data[0]
    print 'blobs fc7'
    print np.unique(weights)
    weights = net.blobs['score_fr_sign'].data[0]
    print 'blobs score_fr_sign'
    print np.unique(weights)
    weights = net.blobs['upscore_sign'].data[0]
    print 'blobs upscore_sign'
    print np.unique(weights)
    weights = net.blobs['score'].data[0]
    print weights.shape #+
    sio.savemat('scores.mat', {'weights': weights}) #+
    print 'blobs score'
    print np.unique(weights)
print_param(output)
out = net.blobs['score'].data[0].argmax(axis=0)
print out #+
#np.savetxt("vote", out, fmt="%02d")
np.savetxt("vote", out, fmt="%d")
print im.height
print im.width
print out.shape, len(out.shape)
def array2img(out):
    out1 = np.array(out, np.uint8)
    img = Image.fromarray(out1, 'L')
    for x in range(img.size[0]):
        for y in range(img.size[1]):
            if not img.getpixel((x, y)) == 0:
                print 'PLz', str(img.getpixel((x, y)))
    img.show()
def show_pred_img(file_name):
    file = open(file_name, 'r')
    lines = file.read().split('\n')
    #img_name = str(sys.argv[1])
    im = Image.open(img_name)
    im_pixel = im.load()
    img = Image.new('RGB', im.size, "black")
    pixels = img.load()
    w, h = 0, 0
    for l in lines:
        w = 0
        if len(l) > 0:
            word = l.split(' ')
            for x in word:
                if int(x) == 1:
                    pixels[w, h] = im_pixel[w, h]
                w += 1
            h += 1
    print im.size
    #img.show()
    img.save(img_name + '_result.png')
show_pred_img('vote')
This is the log information from inference:
the blobs
data (1, 1, 256, 256)
data_input_0_split_0 (1, 1, 256, 256)
data_input_0_split_1 (1, 1, 256, 256)
conv1_1 (1, 64, 454, 454)
conv1_2 (1, 64, 454, 454)
pool1 (1, 64, 227, 227)
conv2_1 (1, 128, 227, 227)
conv2_2 (1, 128, 227, 227)
pool2 (1, 128, 114, 114)
conv3_1 (1, 256, 114, 114)
conv3_2 (1, 256, 114, 114)
conv3_3 (1, 256, 114, 114)
pool3 (1, 256, 57, 57)
conv4_1 (1, 512, 57, 57)
conv4_2 (1, 512, 57, 57)
conv4_3 (1, 512, 57, 57)
pool4 (1, 512, 29, 29)
conv5_1 (1, 512, 29, 29)
conv5_2 (1, 512, 29, 29)
conv5_3 (1, 512, 29, 29)
pool5 (1, 512, 15, 15)
fc6 (1, 4096, 9, 9)
fc7 (1, 4096, 9, 9)
score_fr_sign (1, 5, 9, 9)
upscore_sign (1, 5, 320, 320)
score (1, 5, 256, 256)
--------------------------
the parameters
conv1_1 (64, 1, 3, 3)
conv1_2 (64, 64, 3, 3)
conv2_1 (128, 64, 3, 3)
conv2_2 (128, 128, 3, 3)
conv3_1 (256, 128, 3, 3)
conv3_2 (256, 256, 3, 3)
conv3_3 (256, 256, 3, 3)
conv4_1 (512, 256, 3, 3)
conv4_2 (512, 512, 3, 3)
conv4_3 (512, 512, 3, 3)
conv5_1 (512, 512, 3, 3)
conv5_2 (512, 512, 3, 3)
conv5_3 (512, 512, 3, 3)
fc6 (4096, 512, 7, 7)
fc7 (4096, 4096, 1, 1)
score_fr_sign (5, 4096, 1, 1)
upscore_sign (5, 1, 64, 64)
--------------------------
the conv layer weights
[[[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]]
...
[[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]]]
--------------------------
the data blob
[[[[ 29.32040787 20.31391525 20.30148506 ..., 10.41113186 11.42486095
6.42949915]
[ 33.32374954 21.31280136 22.30037117 ..., 9.40779209 10.42189217
8.43079758]
[ 36.32300568 25.30816269 25.29183578 ..., 10.40148449 11.41818142
10.42838573]
...,
[ 34.64990616 31.65658569 30.65714264 ..., 4. 2.99981451
0.99962896]
[ 39.65788651 33.65769958 29.65974045 ..., 5.99981451 4.99944353
0.99888682]
[ 41.6641922 34.66493607 30.66567802 ..., 5.99962902 2.99907231
3.99833035]]]]
--------------------------
the conv1_1 blob
[[[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
...,
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]]]
--------------------------
the pool1 blob
[[[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
...,
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]]]
blobs fc6
[ 0.]
blobs fc7
[ 0.]
blobs score_fr_sign
[-1.61920226 -1.34294271 0.07809996 0.60521388 2.2788291 ]
blobs upscore_sign
[-1.61920238 -1.61920226 -1.61920214 ..., 2.27882886 2.2788291
2.27882934]
(5, 256, 256)
blobs score
[-1.61920238 -1.61920226 -1.61920214 -1.59390223 -1.59390211 -1.5689975
-1.54330218 -1.54330206 -1.51918805 -1.49270213 -1.49270201 -1.4709599
-1.46937859 -1.44210207 -1.44210196 -1.42273164 -1.41956913 -1.39150202
-1.3915019 -1.37608469 -1.37450349 -1.36975968 -1.34294283 -1.34294271
-1.3429426 -1.34090197 -1.34090185 -1.32943773 -1.32627523 -1.32195926
-1.31995022 -1.30130363 -1.2903018 -1.28437209 -1.2827909 -1.27999234
-1.27999222 -1.27804708 -1.27014089 -1.25999236 -1.23970175 -1.23930645
-1.23802543 -1.23802531 -1.23614395 -1.22981894 -1.22033143 -1.21999264
-1.21868122 -1.19605839 -1.19605827 -1.195822 -1.19424069 -1.18949699
-1.1891017 -1.18910158 -1.18159068 -1.17999291 -1.17736995 -1.17052197
-1.15409136 -1.15233755 -1.14917505 -1.14285004 -1.14130461 -1.13999307
-1.13850164 -1.13850152 -1.13605869 -1.13336253 -1.12071252 -1.11212444
-1.11043441 -1.1088531 -1.10410941 -1.10261631 -1.09999335 -1.09620309
-1.09474754 -1.08790159 -1.08790147 -1.08513427 -1.07090306 -1.07015753
-1.07015741 -1.06853116 -1.06536865 -1.06523943 -1.06392801 -1.05999362
-1.05904365 -1.05343628 -1.04955614 -1.03730154 -1.03730142 -1.03690612
-1.02820921 -1.02819049 -1.02786267 -1.02662802 -1.02523971 -1.0218842
-1.02109361 -1.0199939 -1.013978 -1.01212502 -1.00290918 -0.99179727
-0.99048585 -0.98867792 -0.98788732 -0.98670143 -0.98670137 -0.9865514
-0.98622358 -0.98622352 -0.98472482 -0.97999406 -0.97839981 -0.97128415
-0.97081381 -0.9689123 -0.95626229 -0.95573193 -0.95310903 -0.94914663
-0.94786316 -0.94756538 -0.9442566 -0.94425654 -0.94282162 -0.94044977
-0.93999434 -0.93491536 -0.92950261 -0.9238466 -0.92097807 -0.91966659
-0.9157322 -0.91040593 -0.90961534 -0.90917486 -0.90724343 -0.90228963
-0.90091842 -0.89999455 -0.89143091 -0.88819134 -0.88622415 -0.88360125
-0.8787809 -0.87835538 -0.87324655 -0.8716653 -0.87048656 -0.86692154
-0.86032271 -0.86032265 -0.85999483 -0.85901529 -0.85278171 -0.85147029
-0.84794647 -0.84753585 -0.84688014 -0.8409785 -0.83608711 -0.8329246
-0.83179826 -0.8265996 -0.81999505 -0.81933933 -0.81835574 -0.81835568
-0.81711209 -0.81671637 -0.81147051 -0.80556893 -0.80360168 -0.80050892
-0.79892766 -0.79418391 -0.79310995 -0.78720838 -0.78627765 -0.7858969
-0.78196251 -0.77999532 -0.77540517 -0.76622486 -0.76493073 -0.76176822
-0.75544322 -0.75507742 -0.75442165 -0.75245446 -0.7472086 -0.73933983
-0.73093385 -0.72935259 -0.72884804 -0.72460884 -0.72425795 -0.72294647
-0.71901208 -0.71245474 -0.70327443 -0.69693691 -0.6937744 -0.69343841
-0.69081551 -0.68556964 -0.67770082 -0.66452122 -0.66393042 -0.66293997
-0.66261894 -0.65868455 -0.65212721 -0.63442242 -0.63210559 -0.63179946
-0.6265536 -0.60622585 -0.60491437 -0.60127115 -0.60097998 -0.57802927
-0.57540637 -0.55114424 -0.54983276 -0.52425915 -0.49868551 0.02900147
0.03048873 0.03197598 0.03205225 0.03346324 0.03361578 0.03495049
0.0351793 0.03525557 0.03643775 0.03674283 0.03689536 0.037925
0.03830635 0.03853516 0.03861143 0.03941226 0.03986987 0.04017495
0.04032749 0.04089952 0.0414334 0.04181475 0.04204356 0.04211983
0.04238677 0.04299692 0.04345454 0.04375962 0.04387403 0.04391216
0.04456045 0.04509434 0.04536128 0.04547568 0.04570449 0.04578076
0.04612397 0.04673413 0.04684854 0.04719175 0.04749683 0.04759216
0.04764936 0.0476875 0.04837392 0.04890781 0.04925102 0.04928916
0.04951797 0.04959423 0.05001372 0.05003278 0.05003279 0.05062388
0.05108149 0.05138657 0.05153911 0.05165351 0.05233994 0.05247341
0.05247341 0.05287382 0.05325517 0.05348398 0.05356025 0.054056
0.05466616 0.05491403 0.05491403 0.05512378 0.05542885 0.05558139
0.05645849 0.05699238 0.05735466 0.05735466 0.05737372 0.05760253
0.0576788 0.05886098 0.05931859 0.05962367 0.05977621 0.05979528
0.05979528 0.06126347 0.06164481 0.06187363 0.06194989 0.0622359
0.06223591 0.06366596 0.06397104 0.06412357 0.06467653 0.06606845
0.06629726 0.06637353 0.06711715 0.06847093 0.06862348 0.06955777
0.06955778 0.07087342 0.0709497 0.0719984 0.0719984 0.07327592
0.07443902 0.07443903 0.0756784 0.07687964 0.07687965 0.07809995
0.07809996 0.07809997 0.22473885 0.23626392 0.24778898 0.24838002
0.25931406 0.26049611 0.27083912 0.27261221 0.27320322 0.28236419
0.28472832 0.28591037 0.29388925 0.29684439 0.29861748 0.29920852
0.30541432 0.3089605 0.31132463 0.31250668 0.31693938 0.3210766
0.32403174 0.32580483 0.32639587 0.32846448 0.33319271 0.33673888
0.33910298 0.33998954 0.34028506 0.34530881 0.349446 0.35151461
0.35240114 0.35417423 0.35476527 0.35742489 0.36215314 0.36303967
0.36569929 0.36806342 0.36880219 0.36880222 0.36924547 0.36954099
0.37486026 0.37899747 0.38165709 0.38195261 0.3837257 0.38431671
0.38756737 0.38771513 0.38771516 0.39229563 0.39584181 0.39820591
0.39938796 0.40027452 0.40559378 0.40662807 0.40973097 0.41268614
0.4144592 0.41505024 0.41889194 0.42362016 0.42554098 0.42554101
0.42716634 0.42953047 0.43071252 0.43750936 0.44164655 0.44445392
0.44445395 0.44460171 0.44637477 0.44696581 0.45612678 0.45967296
0.46203706 0.46321911 0.46336687 0.4633669 0.4747442 0.47769934
0.47947243 0.48006344 0.48227981 0.48227984 0.49336162 0.49572572
0.49690777 0.50119275 0.51197904 0.5137521 0.51434314 0.52010566
0.52010572 0.53059644 0.53177851 0.53901857 0.53901863 0.54921389
0.54980487 0.55793154 0.56783128 0.57684445 0.57684451 0.58644873
0.59575737 0.59575742 0.60521382 0.60521388 0.60521394 0.84621561
0.88961124 0.93300694 0.93523234 0.97640258 0.98085344 1.01979828
1.02647448 1.02869999 1.06319392 1.07209563 1.07654643 1.10658967
1.11771667 1.12439299 1.12661839 1.14998531 1.16333783 1.17223942
1.17669034 1.19338095 1.20895886 1.22008598 1.22676229 1.22898769
1.23677659 1.25458002 1.26793253 1.27683413 1.28017235 1.28128505
1.30020106 1.31577897 1.32356799 1.32690609 1.3335824 1.3358078
1.34582222 1.36362553 1.36696362 1.37697804 1.38587976 1.38866138
1.3886615 1.39033055 1.39144325 1.41147208 1.42704999 1.43706429
1.43817711 1.44485331 1.4470787 1.45931852 1.45987487 1.45987499
1.47712183 1.49047434 1.49937606 1.50382698 1.50716507 1.52719378
1.53108823 1.53108835 1.5427717 1.55389881 1.56057513 1.56280053
1.57726574 1.59506905 1.6023016 1.60230172 1.60842156 1.61732328
1.62177408 1.6473664 1.66294444 1.67351508 1.6735152 1.67407143
1.68074775 1.68297315 1.71746719 1.7308197 1.7397213 1.74417222
1.74472845 1.74472857 1.78756785 1.79869497 1.80537117 1.80759656
1.81594181 1.81594193 1.81594205 1.85766852 1.86657023 1.87102103
1.88715529 1.88715541 1.9277693 1.9344455 1.9366709 1.95836878
1.99786997 2.00232077 2.02958202 2.02958226 2.06797075 2.07019615
2.10079551 2.10079575 2.1380713 2.17200899 2.20817208 2.24322224
2.24322248 2.27882886 2.2788291 2.27882934]
256
256
(256, 256) 2
(256, 256)
I have a few major questions:
Why is the output black?
How can I know when to stop running the algorithm (i.e., at which iteration number)? I really do not know the optimum iteration number or loss value at which I can stop fine-tuning. I stopped training at 40,000 iterations, but I have no idea whether that is right.
Is it necessary that the result of the segmentation be a grayscale image as well (like the input), or does creating an RGB result image make no difference to the output?
I really do not know how much of this I am doing the right way. Quite CONFUSED :(
Does anyone have any suggestions? I really appreciate your help.
Make sure that your labels' data type is uint8! I had the same problem!
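A quick way to check the label dtype and values (a sketch; the label path is a placeholder):
import numpy as np
from PIL import Image

label = np.array(Image.open('/path/to/label.png'))  # placeholder path
print label.dtype        # should be uint8
print np.unique(label)   # should be class indices (e.g. 0..4), not 0/255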
Also make sure that you have a weight filler, like the one below, in your prototxt before training!
layer {
  name: "myupscore2"
  type: "Deconvolution"
  bottom: "myscore_fr"
  top: "myupscore2"
  param {
    lr_mult: 5
  }
  convolution_param {
    group: 2
    num_output: 2
    weight_filler: { type: "bilinear" }
    bias_term: false
    kernel_size: 4
    stride: 16
  }
}
good luck!
Yes, it usually depends on your image size! Have you checked your data types? Both your images and ground truths should be uint8!
Also, have you added the "group" line to your Deconv layer?
Best
I have a rather simple question but still couldn't make it work.
I want a block-diagonal n^2 x n^2 matrix. The blocks are sparse n x n matrices with just the main diagonal, the first off-diagonals, and the fourth off-diagonals. For the simple case of n = 4 this can easily be done:
import numpy as np
from scipy import sparse

n = 4
datanew = np.ones((5, n))
datanew[2] = -2 * datanew[2]
diagsn = [-4, -1, 0, 1, 4]
DD2 = sparse.spdiags(datanew, diagsn, n, n)
new = sparse.block_diag([DD2, DD2, DD2, DD2])
Since this is only useful for small n, is there a better way to use block_diag? I am thinking of n -> 1000.
A simple way of constructing a long list of DD2 matrices is with a list comprehension:
In [128]: sparse.block_diag([DD2 for _ in range(20)]).A
Out[128]:
array([[-2, 1, 0, ..., 0, 0, 0],
[ 1, -2, 1, ..., 0, 0, 0],
[ 0, 1, -2, ..., 0, 0, 0],
...,
[ 0, 0, 0, ..., -2, 1, 0],
[ 0, 0, 0, ..., 1, -2, 1],
[ 0, 0, 0, ..., 0, 1, -2]])
In [129]: _.shape
Out[129]: (80, 80)
At least in my version, block_diag wants a list of arrays, not *args:
In [133]: sparse.block_diag(DD2,DD2,DD2,DD2)
...
TypeError: block_diag() takes at most 3 arguments (4 given)
In [134]: sparse.block_diag([DD2,DD2,DD2,DD2])
Out[134]:
<16x16 sparse matrix of type '<type 'numpy.int32'>'
with 40 stored elements in COOrdinate format>
This probably isn't the fastest way to construct such a block diagonal array, but it's a start.
================
Looking at the code for sparse.block_diag I deduce that it does:
In [145]: rows = []
In [146]: for i in range(4):
   .....:     arow = [None]*4
   .....:     arow[i] = DD2
   .....:     rows.append(arow)
   .....:
In [147]: rows
Out[147]:
[[<4x4 sparse matrix of type '<type 'numpy.int32'>'
with 10 stored elements (5 diagonals) in DIAgonal format>,
None,
None,
None],
[None,
<4x4 sparse matrix of type '<type 'numpy.int32'>'
...
None,
<4x4 sparse matrix of type '<type 'numpy.int32'>'
with 10 stored elements (5 diagonals) in DIAgonal format>]]
In other words, rows is a 'matrix' of None entries with DD2 along the diagonal. block_diag then passes these to sparse.bmat.
In [148]: sparse.bmat(rows)
Out[148]:
<16x16 sparse matrix of type '<type 'numpy.int32'>'
with 40 stored elements in COOrdinate format>
bmat in turn collects the data, rows, and cols from the coo format of all the input matrices, joins them into master arrays, and builds a new coo matrix from them.
So an alternative is to construct those 3 arrays directly.
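For identical blocks that construction is simple. A minimal sketch (my own, assuming DD2 is the block built above and N is the number of blocks): take the coo arrays of one block, tile them N times, and shift the row and column indices by the block offsets:
import numpy as np
from scipy import sparse

N = 1000                # number of blocks, e.g. for the n -> 1000 case
block = DD2.tocoo()     # one block's data, row, col arrays
n = block.shape[0]
shift = np.repeat(np.arange(N) * n, block.nnz)  # index offset for each copy
data = np.tile(block.data, N)
row = np.tile(block.row, N) + shift
col = np.tile(block.col, N) + shift
big = sparse.coo_matrix((data, (row, col)), shape=(N * n, N * n))
For this identical-blocks case the same matrix can also be built in one call with sparse.kron(sparse.identity(N), DD2).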