I have a 100×100 binary matrix; the probability of each pixel is given by this relation:
I want to know how to calculate the entropy of this image.
According to the given conditions, the probability of each entry in the matrix can be calculated.
For example, since s(1,1)=0, s(1,2)=0, and s(2,1)=0, you can calculate P(s(2,2)=0) and P(s(2,2)=1) using the given relation.
After you have calculated the probabilities for all entries, you can compute the entropy of the image as an expected value.
Entropy is given by the formula H = -Sum(P log2 P),
where log2 is the base-2 logarithm and P is the probability of each possible value.
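For instance, here is a minimal NumPy sketch of that expected-value computation, assuming you have already derived p(i, j) = P(s(i, j) = 1) for every entry from the given relation (the uniform 0.5 value below is only a placeholder):

import numpy as np

# p[i, j] = P(s(i, j) = 1), derived entry by entry from the given relation;
# the uniform 0.5 here is only a placeholder.
p = np.full((100, 100), 0.5)

eps = 1e-12  # guard against log2(0)
# -Sum(P log2 P) over both possible values (0 and 1) of every entry
H = -np.sum(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
print(H, "bits")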
I want to calculate the histogram of the variance local binary pattern (VAR LBP) of a grayscale image using OpenCV C++.
Can someone explain to me exactly how to find the histogram for variance LBP in OpenCV C++ and what exactly it means?
Also, please provide some links that are useful in this case.
VAR is a rotation-invariant measure of local variance (have a look at this paper for a more in-depth explanation), defined as:
VAR_{P,R} = (1/P) Σ_{p=0}^{P-1} (g_p − μ)², with μ = (1/P) Σ_{p=0}^{P-1} g_p,
where P is the number of pixels in the local neighbourhood, g_p are their intensities, and μ is the average intensity computed across the local neighbourhood.
LBP variance (LBPV) is a texture descriptor that uses VAR as an adaptive weight to adjust the contribution of each LBP code to the histogram (see this paper for details). The value of the k-th bin of the LBPV histogram can be expressed as:
LBPV_{P,R}(k) = Σ_{i=1}^{N} Σ_{j=1}^{M} w(LBP_{P,R}(i,j), k),
where N and M are the number of rows and columns of the image, respectively, and w is given by:
w(LBP_{P,R}(i,j), k) = VAR_{P,R}(i,j) if LBP_{P,R}(i,j) = k, and 0 otherwise.
According to this answer, the code for calculating LBP using OpenCV is not available for public use, but here you can find a workaround to make that function accessible.
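For reference, here is a minimal NumPy sketch of the LBPV idea for the basic 3×3 (P=8, R=1) neighbourhood. It is not the OpenCV C++ routine the question asks about, just an illustration of how each pixel's LBP code votes into the histogram with its local variance as the weight:

import numpy as np

def lbpv_histogram(img, bins=256):
    # Basic P=8, R=1 LBPV sketch (not rotation invariant).
    img = np.asarray(img, dtype=np.float64)
    center = img[1:-1, 1:-1]
    # the 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    neigh = np.stack([img[1 + dy:img.shape[0] - 1 + dy,
                          1 + dx:img.shape[1] - 1 + dx]
                      for dy, dx in offsets])
    # LBP code: threshold each neighbour against the centre pixel
    code = np.zeros(center.shape, dtype=np.int64)
    for bit, plane in enumerate(neigh):
        code |= (plane >= center).astype(np.int64) << bit
    # VAR: variance of the neighbourhood intensities around their mean
    var = neigh.var(axis=0)
    # bin k accumulates the VAR of every pixel whose LBP code equals k
    return np.bincount(code.ravel(), weights=var.ravel(), minlength=bins)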
In OpenCV I'm using the cartToPolar function and I want to know the maximum possible value of the calculated magnitude. Can it be greater than 255? I found something about the calculation in the documentation on the Hough Line Transform, but I still do not know the maximum value. I need this to calculate histograms, so that I can divide the range into the right buckets.
Best regards,
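One way to check empirically (a sketch using the Python binding, with a hypothetical file name): cartToPolar returns the magnitude sqrt(x² + y²), so for Sobel gradients of an 8-bit image it can exceed 255, and the safest way to size the histogram buckets is to inspect the actual maximum in your data:

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
mag, ang = cv2.cartToPolar(gx, gy)                    # mag = sqrt(gx^2 + gy^2)
print("max magnitude:", mag.max())                    # use this to size the histogram range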
I am trying to decompose an image using various wavelets (Daubechies, coif, symlet, ortho) of all orders. Except for db1 (Haar), the others produce some negative coefficients in the approximation band. My understanding is that the approximation band contains the average values of the original image and hence should contain only positive values. Does it also depend on the filter coefficients used for decomposition? I implemented the decomposition using the dwt2 command as well as circular convolution with the filter coefficients. Both produce the same results for higher-order wavelet filters.
I want to extract features from the wavelet coefficients; negative coefficients may result in wrong feature values, hence I want to clarify this.
Thanks.
Yes, the approximation band also depends on the filter coefficients used for decomposition. More precisely, negative approximation coefficients are expected whenever the low-pass decomposition filter has negative coefficients. If you require only positive coefficients in the approximation band, use one of these wavelets in MATLAB: rbio1.x, rbio2.x, rbio3.x.
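As a quick check, here is a small PyWavelets sketch (a Python stand-in for MATLAB's dwt2; the random image is only a placeholder) that reports whether a wavelet's low-pass decomposition filter has negative taps and whether the resulting approximation band goes negative:

import numpy as np
import pywt

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)   # placeholder image

for name in ["db1", "db2", "sym4", "coif1", "rbio1.3"]:
    w = pywt.Wavelet(name)
    cA, (cH, cV, cD) = pywt.dwt2(img, name)
    print(name,
          "| negative low-pass taps:", any(c < 0 for c in w.dec_lo),
          "| negative approximation coeffs:", bool((cA < 0).any()))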
After applying a Fourier transform to a signal, the energy of a single sine wave is often spread out across multiple bins (aka smearing). Have a look at the right side of the image below for an illustration:
I want to extract a list of peak frequencies. Just finding the highest bin is easy, but after that, smearing becomes a problem.
I would like a heuristic that tells me whether the magnitude of a specific bin could be the result of smearing, or whether there has to be another peak frequency to explain the signal. (It is better to miss some peaks than to have false positives.)
My naive approach would be to just calculate a few thousand examples and take the maximum of these to get an envelope curve so that any smearing is likely below that envelope.
But is there a better way to do this?
The FFT result of any rectangularly windowed, pure, unmodulated sinusoid is a sinc function, sin(πx)/(πx). This sinc is zero at every bin except one (the peak magnitude) only when the frequency of the input sinusoid fits an exact integer number of periods into the FFT width.
For all other frequencies, which do not fall at the exact center of an FFT bin, if you know the exact frequency and magnitude (neither of which will be in any single FFT bin), you can calculate all the other bins by sampling the sinc function.
And, of course, if the input sinusoid isn't perfectly pure, but modulated in any way (amplitude, frequency or phase), this modulation will produce various sidebands in the FFT result as well.
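As an illustration of sampling the sinc to predict the other bins, here is a small NumPy sketch that predicts the smeared bins of a rectangularly windowed cosine from its exact (non-integer) frequency; for a length-N DFT the exact shape is the periodic sinc (Dirichlet kernel), and the prediction below ignores the small contribution of the negative-frequency image:

import numpy as np

N = 1024
f = 50.3                      # non-integer number of cycles, so the energy smears
A = 1.0
n = np.arange(N)
x = A * np.cos(2 * np.pi * f * n / N)

X = np.abs(np.fft.rfft(x))

def predicted_mag(k, f, N, A):
    # magnitude of the positive-frequency Dirichlet-kernel term at bin k
    u = k - f
    if np.isclose(np.sin(np.pi * u / N), 0.0):
        return A * N / 2
    return (A / 2) * abs(np.sin(np.pi * u) / np.sin(np.pi * u / N))

for k in range(45, 56):
    print(k, round(float(X[k]), 2), round(predicted_mag(k, f, N, A), 2))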
I have 2-D data (zero-mean, normalized). I know its covariance matrix, eigenvalues, and eigenvectors. I want to decide whether or not to reduce the dimension to 1 (I use principal component analysis, PCA). How can I decide? Is there any methodology for it?
I am looking for something like: if you look at this ratio and the ratio is high, then it is reasonable to go ahead with dimensionality reduction.
PS 1: Does PoV (proportion of variation) stand for this?
PS 2: Here is an answer: https://stats.stackexchange.com/questions/22569/pca-and-proportion-of-variance-explained. Is it a criterion for testing this?
PoV (proportion of variation) represents how much of the information in the data is retained relative to using all of it, so it can be used for that purpose. If PoV is high, then less information will be lost.
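Concretely, PoV is the sum of the eigenvalues of the components you keep divided by the sum of all eigenvalues. A minimal NumPy sketch for the 2-D case (the random data below is just a stand-in for your zero-mean samples, and the 0.9 threshold is only a common rule of thumb, not a hard rule):

import numpy as np

X = np.random.randn(500, 2) @ np.array([[3.0, 0.0], [1.0, 0.5]])   # stand-in data
X -= X.mean(axis=0)                                                 # zero mean

cov = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]        # eigenvalues, largest first

pov = eigvals[0] / eigvals.sum()               # PoV of keeping only the first component
print("PoV of the first principal component:", pov)
if pov > 0.9:                                  # e.g. a 90% rule of thumb
    print("reducing to 1 dimension loses little information")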
You want to sort your eigenvalues by magnitude and then pick the highest one or two. Eigenvalues with a very small relative value can be considered for exclusion. You can then transform the data using only the top one or two eigenvectors to get coordinates for plotting the results, which gives a visual representation of the PCA split. Also check out scikit-learn for more on PCA. Precision, recall, and F1 scores will tell you how well it works.
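If you go the scikit-learn route, its PCA object reports the per-component proportion of variance directly (random stand-in data below):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(500, 2)                    # stand-in for your zero-mean 2-D data
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)           # per-component proportion of variance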
from http://sebastianraschka.com/Articles/2014_pca_step_by_step.html...
Step 1: 3D Example
"For our simple example, where we are reducing a 3-dimensional feature space to a 2-dimensional feature subspace, we are combining the two eigenvectors with the highest eigenvalues to construct our d×kd×k-dimensional eigenvector matrix WW.
matrix_w = np.hstack((eig_pairs[0][1].reshape(3,1),
                      eig_pairs[1][1].reshape(3,1)))
print('Matrix W:\n', matrix_w)
>>>Matrix W:
[[-0.49210223 -0.64670286]
[-0.47927902 -0.35756937]
[-0.72672348 0.67373552]]"
Step 2: 3D Example
"
In the last step, we use the 2×32×3-dimensional matrix WW that we just computed to transform our samples onto the new subspace via the equation
y=W^T×x
transformed = matrix_w.T.dot(all_samples)
assert transformed.shape == (2,40), "The matrix is not 2x40 dimensional."
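Since eig_pairs and all_samples are not defined in the quoted excerpt, here is a self-contained sketch of both steps, with random 3-D data (40 samples) standing in for the tutorial's dataset:

import numpy as np

np.random.seed(0)
all_samples = np.random.randn(3, 40)           # 3 features x 40 samples, as in the tutorial

cov = np.cov(all_samples)
eig_vals, eig_vecs = np.linalg.eigh(cov)
# pair each eigenvalue with its eigenvector and sort by eigenvalue, largest first
eig_pairs = sorted(((val, eig_vecs[:, i]) for i, val in enumerate(eig_vals)),
                   key=lambda p: p[0], reverse=True)

# Step 1: stack the top two eigenvectors into the 3x2 projection matrix W
matrix_w = np.hstack((eig_pairs[0][1].reshape(3, 1),
                      eig_pairs[1][1].reshape(3, 1)))

# Step 2: project the samples onto the 2-D subspace, y = W^T x
transformed = matrix_w.T.dot(all_samples)
assert transformed.shape == (2, 40), "The matrix is not 2x40 dimensional."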