I'm working on an image super-resolution problem (both 2D and 3D) using TensorFlow and am using SSIM as one of the eval_metrics.
I'm using image.ssim from TF and measure.compare_ssim from skimage. Both of them give the same results for 2D, but there is always a difference in the results for 3D volumes.
I've looked into the source code of both the TF implementation and the skimage implementation. There seem to be some fundamental differences in how the input images are considered and handled in the two implementations.
Code to replicate the issue:
import numpy as np
import tensorflow as tf
from skimage import measure
# For 2-D case
np.random.seed(12345)
a = np.random.random([32, 32, 64])
b = np.random.random([32, 32, 64])
a_ = tf.convert_to_tensor(a)
b_ = tf.convert_to_tensor(b)
ssim_2d_tf = tf.image.ssim(a_, b_, 1.0)
ssim_2d_sk = measure.compare_ssim(a, b, multichannel=True, gaussian_weights=True, data_range=1.0, use_sample_covariance=False)
print (tf.Session().run(ssim_2d_tf), ssim_2d_sk)
# For 3-D case
np.random.seed(12345)
a = np.random.random([32, 32, 32, 64])
b = np.random.random([32, 32, 32, 64])
a_ = tf.convert_to_tensor(a)
b_ = tf.convert_to_tensor(b)
ssim_3d_tf = tf.image.ssim(a_, b_, 1.0)
ssim_3d_sk = measure.compare_ssim(a, b, multichannel=True, gaussian_weights=True, data_range=1.0, use_sample_covariance=False)
s_3d_tf = tf.Session().run(ssim_3d_tf)
print (np.mean(s_3d_tf), ssim_3d_sk)
In the 3D case I have to take the mean of the output, as TensorFlow computes SSIM over the last three dimensions and hence returns 32 SSIM values. This suggests that TF treats the input as images in NHWC format. Is this appropriate for SSIM over 3D volumes?
skimage, however, seems to be using 1D Gaussian filters, so clearly even this is not considering depth in 3D volumes.
Can someone shed some light on this and help me decide which one to use, and why?
From a cursory look at the code, it seems that TensorFlow always computes a 2D SSIM, for each image in the batch and for each channel. It averages SSIM values across channels, and returns a value for each image in the batch. For TF, a 4D array is a collection of 2D images with multiple channels.
In contrast, SciKit-Image computes SSIM over all dimensions, except the last one if multichannel is set. So in the case of a 4D array, it computes a 3D SSIM for each channel and averages across channels.
This is consistent with your finding of similar results for a 3D array, but different results for a 4D array.
skimage, however, seems to be using 1D Gaussian filters.
I'm not sure where you got this from; SciKit-Image uses an nD Gaussian for an nD image. However, a Gaussian is a separable filter, meaning it can be implemented efficiently by n applications of a 1D filter.
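To make the per-channel 2D behaviour explicit, here is a minimal sketch (assuming the same TF 1.x / old skimage measure.compare_ssim APIs used in the question) that computes a 2D SSIM per channel with skimage and averages the results; it should agree with both tf.image.ssim and the multichannel skimage call for the 2D case:
import numpy as np
from skimage import measure

np.random.seed(12345)
a = np.random.random([32, 32, 64])
b = np.random.random([32, 32, 64])

# One 2D SSIM per channel, then an average across channels -
# this mirrors what both implementations do for an (H, W, C) input.
per_channel = [
    measure.compare_ssim(a[..., c], b[..., c],
                         gaussian_weights=True, data_range=1.0,
                         use_sample_covariance=False)
    for c in range(a.shape[-1])
]
print(np.mean(per_channel))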
I have some raw images to debayer and then apply colour corrections/transforms to. I use OpenCV and C++, and for the image sensor used, the linear matrix coefficients are:
1.32 -0.46 0.14
-0.36 1.25 0.11
0.08 -1.96 1.88
I am not sure how to apply these to the image. It's not clear to me what I am supposed to do with them and why.
Can anyone explain what these colour reproduction or colour matrix values are, and how to use them to process an image?
Thank you!
Your question is not clear because it seems you also don't know what to do.
"what I am supposed to do with them"
The first thing that comes to my mind is that you can convolve the image with that matrix using filter2D. According to the filter2D documentation:
Convolves an image with the kernel.
The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.
Here is an example code snippet showing how to use it:
Mat output;
Mat kernelMatrix = (Mat_<double>(3, 3) << 1.32, -0.46, 0.14,
                                          -0.36,  1.25, 0.11,
                                           0.08, -1.96, 1.88);
filter2D(rawImage, output, -1, kernelMatrix);
Before debayering you have an array B (-ayer) of MxN filtered "graylevel" values. They are physically filtered in the sense that the number of photons measured by each one of them is affected by the color filter on top of each sensor site.
After debayering you have an array C (-olor) of MxNx3 BGR values, obtained by (essentially) reindexing the B array. However, each of the 3 values at a (row, col) image location represents 3 physical measurements. This is not the final image because we still need to "convert" the physical measurements to numbers that are representative of color channels as perceived by a human (or, more generally, by the intended user, which could also be some kind of image processing software). That is, the physical values need to be mapped to a color space.
The 3x3 "color correction" matrix you have represents one possible mapping - a simple linear one. You need to apply it in turn to each BGR triple at all (row, col) pixel locations. For example (in python/numpy/cv2):
import numpy as np

def colorCorrect(img, M):
    """Applies a color correction M to a BGR image img"""
    rows, cols, depth = img.shape
    assert depth == 3
    assert M.shape == (3, 3)
    img_corr = np.zeros((rows, cols, 3), dtype=img.dtype)
    for r in range(rows):
        for c in range(cols):
            img_corr[r, c, :] = M.dot(img[r, c, :])
    return img_corr
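Since the double loop can be slow on large images, here is a hedged, vectorized sketch of the same per-pixel matrix multiplication in plain NumPy (the function name is mine, not from the original answer):
import numpy as np

def color_correct_vectorized(img, M):
    """Applies the 3x3 color correction M to every BGR pixel at once."""
    rows, cols, depth = img.shape
    assert depth == 3 and M.shape == (3, 3)
    # Flatten to (N, 3), multiply by M transposed so each row becomes M.dot(pixel),
    # then reshape back to the original image shape.
    corrected = img.reshape(-1, 3) @ M.T
    return corrected.reshape(rows, cols, 3).astype(img.dtype)
If you prefer to stay inside OpenCV, cv2.transform applies a matrix to every pixel and should, as far as I know, do the same thing.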
I am trying to understand unpooling in Pytorch because I want to build a convolutional auto-encoder.
I have the following code
import torch
import torch.nn as nn
from torch.autograd import Variable

data = Variable(torch.rand(1, 73, 480))
pool_t = nn.MaxPool2d(2, 2, return_indices=True)
unpool_t = nn.MaxUnpool2d(2, 2)
out, indices1 = pool_t(data)
out = unpool_t(out, indices1)
But I am constantly getting this error on the last line (unpooling).
IndexError: tuple index out of range
Although the data is simulated in this example, the input has to be of that shape because of the preprocessing that has to be done.
I am fairly new to convolutional networks, but I have even tried using a ReLU and a 2D convolutional layer before the pooling; however, the indices always seem to be incorrect when unpooling for this shape.
Your data is 1D and you are using 2D pooling and unpooling operations.
PyTorch interprets the first two dimensions of a tensor as the "batch" dimension and the "channel"/"feature space" dimension. The remaining dimensions are treated as spatial dimensions.
So, in your example, data is a 3D tensor of size (1, 73, 480) and is interpreted by PyTorch as a single batch ("batch dimension" = 1) with 73 channels per sample and 480 samples.
For some reason MaxPool2d works for you and treats the channel dimension as a spatial dimension and subsamples it as well - I'm not sure whether this is a bug or a feature.
If you do want to pool along the second dimension as well, you can add an additional leading dimension, making data a 4D tensor:
out, indices1 = pool_t(data[None,...])
out = unpool_t(out, indices1, data[None,...].size())
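For completeness, here is a minimal, self-contained sketch of the fixed version (assuming a recent PyTorch where Variable is no longer needed), just to make the shapes explicit:
import torch
import torch.nn as nn

data = torch.rand(1, 73, 480)      # (batch=1, channels=73, length=480)
data4d = data[None, ...]           # (1, 1, 73, 480): batch, channel, height, width

pool_t = nn.MaxPool2d(2, 2, return_indices=True)
unpool_t = nn.MaxUnpool2d(2, 2)

out, indices = pool_t(data4d)      # pooled to (1, 1, 36, 240)
recon = unpool_t(out, indices, output_size=data4d.size())
print(out.shape, recon.shape)      # (1, 1, 36, 240) and (1, 1, 73, 480)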
I'm trying to build a generalized batch normalization function in Tensorflow.
I learned about batch normalization from this article, which I found very helpful.
I have a problem with the dimensions of the scale and beta variables: in my case batch normalization is applied to each activation of each convolutional layer, so if the output of the convolutional layer is a tensor of size
[57, 57, 96]
I need scale and beta to have the same dimensions as the convolutional layer output, correct?
Here's my function; the program works but I don't know if it is correct:
def batch_normalization_layer(batch):
    # Calculate batch mean and variance
    batch_mean, batch_var = tf.nn.moments(batch, axes=[0, 1, 2])
    # Apply the initial batch normalizing transform
    scale = tf.Variable(tf.ones([batch.get_shape()[1], batch.get_shape()[2], batch.get_shape()[3]]))
    beta = tf.Variable(tf.zeros([batch.get_shape()[1], batch.get_shape()[2], batch.get_shape()[3]]))
    normalized_batch = tf.nn.batch_normalization(batch, batch_mean, batch_var, beta, scale, 0.0001)
    return normalized_batch
from the documentation of tf.nn.batch_normalization:
mean, variance, offset and scale are all expected to be of one of two shapes:
In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=True) during training, or running averages thereof during inference.
In the common case where the 'depth' dimension is the last dimension in the input tensor x, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=False) during training, or running averages thereof during inference.
With your values (scale=1.0 and offset=0) you can also just provide the value None.
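Following the 'common case' in the documentation quoted above, a minimal sketch of the same layer with per-channel scale and beta might look like this (my own variation, assuming TF 1.x-style variables and an NHWC input, as in the question):
import tensorflow as tf

def batch_normalization_layer(batch, epsilon=0.0001):
    # One scale/beta value per channel ('depth'), e.g. 96 for a [N, 57, 57, 96] output.
    depth = batch.get_shape()[-1]
    # Per-channel statistics over batch, height and width.
    batch_mean, batch_var = tf.nn.moments(batch, axes=[0, 1, 2])
    scale = tf.Variable(tf.ones([depth]))
    beta = tf.Variable(tf.zeros([depth]))
    return tf.nn.batch_normalization(batch, batch_mean, batch_var, beta, scale, epsilon)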
I have this MATLAB code to display the spectrogram image after some processing (STFT, coupled PLCA, ...):
t = z2 *stft_options.hop/stft_options.sr;
f = stft_options.sr*[0:size(spec_t,1)-1]/stft_options.N/1000;
max_val = max(max(db(abs(spec_t))));
imagesc(t, f, db(abs(spec_t)),[max_val-60 max_val]);
And get this result:
I ported this to C++ using the Armadillo library and got the mat results:
mat f,t,spec_t;
The problem is that I have no idea how to convert these matrices to a bitmap the way imagesc does in MATLAB.
I searched and found this answer, but it seems it doesn't work in my case because:
I use a double matrix instead of an integer matrix, which can't be mapped directly to bitmap colours
The imagesc function takes 4 parameters, including the axis bounds given by the vectors x and y
The imagesc function also supports scaling (I actually don't know how it works)
Does anyone have any suggestion?
Update: Here is the result of Armadillo's save method. It doesn't look like the spectrogram image above. Am I missing something?
spec_t.save("spec_t.png", pgm_binary);
Update 2: saving the spectrogram with db and abs:
mat mag_spec_t = db(abs(spec_t)); // where the db method is: m = 10 * log10(m);
mag_spec_t.save("mag_spec_t.png", pgm_binary);
And the result:
Armadillo is a linear algebra package; AFAIK it does not provide graphics routines. If you use something like OpenCV for those, then it is really simple.
See this link about opencv's imshow(), and this link on how to use it in a program.
Note that opencv (like most other libraries) uses row-major indexing (x,y) and Armadillo uses column-major (row,column) indexing, as explained here.
For scaling, it's safest to convert to unsigned char yourself. In Armadillo that would be something like:
arma::Mat<unsigned char> mat2 = arma::conv_to<arma::Mat<unsigned char>>::from(255*(mat - mat.min())/(mat.max() - mat.min()));
The t and f variables are for setting the axes; they are not part of the bitmap.
For just writing an image you can use Armadillo. Here is a description on how to write portable grey map (PGM) and portable pixel map (PPM) images. PGM export is only possible for 2D matrices, PPM export only for 3D matrices, where the 3rd dimension (size 3) are the channels for red, green and blue.
The reason your MATLAB figure looks prettier is that it has a colour map: a mapping of every value 0..255 to a vector [R, G, B] specifying the relative intensities of red, green and blue. A photo has an RGB value at every point:
colormap(gray);
x=imread('onion.png');
imagesc(x);
size(x)
That's the 3rd dimension of the image.
Your matrix is a 2d image, so the most natural way to show it is as grey levels (as happened for your spectrum).
x=mean(x,3);
imagesc(x);
This means that the R, G and B intensities jointly increase with the values in the matrix. You can put a colour map of different R,G,B combinations in a variable and use that instead, e.g. y=colormap('hot'); colormap(y);. The variable y shows the R,G,B combinations for the (rescaled) image values.
It's also possible to make your own colour map (in matlab you can specify 64 R, G, and B combinations with values between 0 and 1):
z = [63:-1:0; 1:2:63 63:-2:0; 0:63]'/63;
colormap(z);
Now for increasing image values, red intensities decrease (starting from the maximum level), green intensities quickly increase then decrease, and blue values increase from minimum to maximum.
Because PPM appears (I don't know the format) not to support colour maps, you need to specify the R,G,B values in a 3D array. For a colour order similar to z you would need to make a Cube<unsigned char> c(ysize, xsize, 3) and then for every pixel y, x in mat2, do:
c(y,x,0) = 255-mat2(y,x);
c(y,x,1) = 255-abs(255-2*mat2(y,x));
c(y,x,2) = mat2(y,x);
or something very similar.
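The same mapping can also be written without the per-pixel loop; here is a rough NumPy sketch of the idea (Python rather than Armadillo, purely to illustrate the colour-map construction; the function name is mine):
import numpy as np

def apply_hotlike_colormap(mat2):
    # mat2: 2D uint8 array with values 0..255; returns an (H, W, 3) RGB array.
    m = mat2.astype(np.int32)                  # avoid uint8 overflow in 2*m
    rgb = np.empty(mat2.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = 255 - m                      # red decreases with the value
    rgb[..., 1] = 255 - np.abs(255 - 2 * m)    # green rises then falls
    rgb[..., 2] = m                            # blue increases with the value
    return rgb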
You may use SigPack, a signal processing library on top of Armadillo. It has spectrogram support and you may save the plot to a lot of different formats (png, ps, eps, tex, pdf, svg, emf, gif). SigPack uses Gnuplot for the plotting.
I am trying to change the RGB for the overall image for a project. Currently I am working with a test file before I apply it to the actual image. I want to test different values of RGB, but would first like to start with the mean of all three. How would I go about doing this? I have other modules installed such as scipy, numpy, matplotlib, etc., if those are needed. Thanks
from PIL import Image, ImageFilter
test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')
test.show()
test.getrgb()
Assuming your image is stored as a numpy.ndarray (test this with print(type(test)))...
Your image will be represented by an NxMx3 array. Basically this means you have an N by M image with a color depth of 3 - your RGB values. Taking the mean of those 3 will leave you with an NxMx1 array, where the 1 is now the average intensity. NumPy does this very well:
test = test.mean(2)
The parameter given, 2, specifies the dimension to take the mean along. It could be 0, 1, or 2, because your image matrix is 3-dimensional. This should return an NxM array. You will basically be left with a gray-scale (color depth of 1) image. Try to show the value that gets returned! If you get Nx3 or Mx3, you know you have taken the average along the wrong axis. Note that you can check the dimensions of a numpy array with:
test.shape
Shape will be a tuple describing the dimensions of your image.
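Putting it together, a minimal sketch of going from the PIL image to the per-pixel mean (the file path is taken from the question; converting back to uint8 for display is an assumption on my part):
import numpy as np
from PIL import Image

test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')
arr = np.asarray(test)            # shape (N, M, 3) for an RGB image
print(arr.shape)

gray = arr.mean(axis=2)           # average R, G and B -> shape (N, M)
print(gray.shape)

# Convert back to a PIL image for display, if desired.
Image.fromarray(gray.astype(np.uint8)).show()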