Matlab's conv2 equivalent in OpenCV - C++

I have been trying to do convolution of a 2D matrix using OpenCV. I went through the code at http://blog.timmlinder.com/2011/07/opencv-equivalent-to-matlabs-conv2-function/#respond, but it yields correct answers only in positive cases. Is there a simple function like Matlab's conv2 for OpenCV or C++?
Here is an example:
A= [
1 -2
3 4
]
I want to convolve it with [-0.707 0.707]
And the result as by conv2 from Matlab is
-0.7071 2.1213 -1.4142
-2.1213 -0.7071 2.8284
Is there a function to compute this output in OpenCV or C++? I would be grateful for a response.

If you want an exclusive OpenCV solution, use the cv2.filter2D function. But you should adjust the borderType flag if you want the same output as Matlab's.
>>> A = np.array([ [1,-2],[3,4] ]).astype('float32')
>>> A
array([[ 1., -2.],
       [ 3.,  4.]], dtype=float32)
>>> B = np.array([[ 0.707,-0.707]])
>>> B
array([[ 0.707, -0.707]])
>>> cv2.filter2D(A,-1,B,borderType = cv2.BORDER_CONSTANT)
array([[-0.70700002,  2.12100005, -1.41400003],
       [-2.12100005, -0.70700002,  2.82800007]], dtype=float32)
borderType is important: to compute the convolution you need values outside the array, and cv2.BORDER_CONSTANT zero-pads, just as Matlab does. Two caveats: filter2D computes correlation rather than convolution, which is why the kernel above is flipped relative to the question's, and its output is always the same size as its input, so to obtain the larger 'full' output shown above the input has to be zero-padded first (e.g. with cv2.copyMakeBorder).
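Putting those pieces together, here is a minimal sketch (my own addition, not from the original answer) of a conv2-style 'full' convolution using only OpenCV calls; cv2.flip undoes filter2D's correlation convention and cv2.copyMakeBorder supplies the zero border:

import cv2
import numpy as np

def conv2_full(A, B):
    # Sketch of Matlab conv2(A, B) ('full' mode) via cv2.filter2D.
    kr, kc = B.shape
    # filter2D computes correlation, so flip the kernel to get convolution.
    Bf = cv2.flip(B, -1)
    # Zero-pad so the 'same'-sized filter2D output covers the 'full' region.
    padded = cv2.copyMakeBorder(A, kr - 1, kr - 1, kc - 1, kc - 1,
                                cv2.BORDER_CONSTANT, value=0)
    # Anchor at the top-left corner so output (r, c) sums the window
    # starting at (r, c) of the padded array.
    out = cv2.filter2D(padded, -1, Bf, anchor=(0, 0),
                       borderType=cv2.BORDER_CONSTANT)
    return out[:A.shape[0] + kr - 1, :A.shape[1] + kc - 1]

A = np.array([[1, -2], [3, 4]], dtype=np.float32)
B = np.array([[-0.707, 0.707]], dtype=np.float32)
print(conv2_full(A, B))
# [[-0.707  2.121 -1.414]
#  [-2.121 -0.707  2.828]]  (as conv2 gives in Matlab)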

If you are using OpenCV with the Python bindings, you can use Scipy, since your images are already ndarrays:
>>> from scipy import signal
>>> A = np.array([[1,-2], [3,4]])
>>> B = np.array([[-0.707, 0.707]])
>>> signal.convolve2d(A,B)
array([[-0.707,  2.121, -1.414],
       [-2.121, -0.707,  2.828]])
Be sure to use 'full' mode (the default) if you want the same result as in Matlab; if you use 'same' mode, Scipy centers the output differently from Matlab.
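A quick shape check (my own sketch, not part of the original answer) makes the difference between the modes concrete:

import numpy as np
from scipy import signal

A = np.array([[1, -2], [3, 4]])
B = np.array([[-0.707, 0.707]])

full = signal.convolve2d(A, B)               # default mode='full'
same = signal.convolve2d(A, B, mode='same')  # cropped to A's shape
print(full.shape, same.shape)                # (2, 3) (2, 2)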

Related

I want to deploy a pytorch segmentation model in a C++ application: C++ equivalent preprocessing

I want to deploy a PyTorch segmentation model in a C++ application. I know that I have to convert the model to TorchScript and use libtorch.
However, what is the C++ equivalent of the following preprocessing? (Converting the OpenCV calls is fine, but I don't know how to convert the others.)
import cv2 as cv
import torch.nn.functional as F
from PIL import Image
from torch.autograd import Variable
from torchvision import transforms

# channel_means, channel_stds, input_size, img, model,
# img_width and img_height are defined elsewhere.
train_tfms = transforms.Compose([transforms.ToTensor(),
                                 transforms.Normalize(channel_means, channel_stds)])
input_width, input_height = input_size[0], input_size[1]
img_1 = cv.resize(img, (input_width, input_height), interpolation=cv.INTER_AREA)
X = train_tfms(Image.fromarray(img_1))
X = Variable(X.unsqueeze(0)).cuda()  # [N, 1, H, W]
mask = model(X)
mask = F.sigmoid(mask[0, 0]).data.cpu().numpy()
mask = cv.resize(mask, (img_width, img_height), interpolation=cv.INTER_AREA)
To create the transformed dataset, you will need to call MapDataset<DatasetType, TransformType> map(DatasetType dataset, TransformType transform) (see the docs).
You will likely have to implement your two transforms yourself; just look at how the existing ones are implemented and imitate that.
The libtorch tutorial will guide you through datasets and dataloaders.
You can call the sigmoid function with torch::nn::functional::sigmoid, I believe.

Integrating an array in scipy with bounds

I am trying to integrate over an array of data, but with bounds. Therefore I planned to use simps (scipy.integrate.simps). Because simps itself does not support bounds, I decided to feed it only the selection of my data I want to integrate over. Yet this leads to strange results which are twice as big as the expected outcome.
What am I doing wrong, or what am I missing or misunderstanding?
# -*- coding: utf-8 -*-
from scipy import integrate
from scipy import interpolate
import numpy as np
import matplotlib.pyplot as plt
# my data
x = np.linspace(-10, 10, 30)
y = x**2
# but I only want to integrate from 3 to 5
f = interpolate.interp1d(x, y)
x_selection = np.linspace(3, 5, 10)
y_selection = f(x_selection)
# quad returns the expected result
print 'quad', integrate.quad(f, 3, 5), '<- the expected value (including error estimation)'
# but simps returns an unexpected result when using the selected data
print 'simps', integrate.simps(x_selection, y_selection), '<- twice as big'
print 'trapz', integrate.trapz(x_selection, y_selection), '<- also twice as big'
plt.plot(x, y, marker='.')
plt.fill_between(x, y, 0, alpha=0.5)
plt.plot(x_selection, y_selection, marker='.')
plt.fill_between(x_selection, y_selection, 0, alpha=0.5)
plt.show()
Windows7, python2.7, scipy1.0.0
The arguments for simps() and trapz() are in the wrong order.
You have flipped the calling arguments; simps and trapz expect the y values first and the x sample points second, as per the docs. Once you have corrected this, you should obtain similar results. Note that your example function admits a trivial analytic antiderivative, which would be much cheaper to evaluate.
– N. Wouda
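Concretely, a corrected version of the two calls might look like this (a minimal sketch; note that newer Scipy renames these functions simpson and trapezoid):

from scipy import integrate
from scipy import interpolate
import numpy as np

x = np.linspace(-10, 10, 30)
y = x**2
f = interpolate.interp1d(x, y)
x_selection = np.linspace(3, 5, 10)
y_selection = f(x_selection)

# y values first, sample points second, as per the docs
print('simps', integrate.simps(y_selection, x_selection))
print('trapz', integrate.trapz(y_selection, x_selection))
# both should be close to quad's result: (5**3 - 3**3)/3 = 32.67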

How to read generator data as a numpy array

def laser_callback(self, laserMsg):
    cloud = self.laser_projector.projectLaser(laserMsg)
    gen = pc2.read_points(cloud, skip_nans=True, field_names=('x', 'y', 'z'))
    self.xyz_generator = gen
    print(gen)
I'm trying to convert the laser data into pointcloud2 data, and then display them using matplotlib.pyplot. I tried traversing individual points in the generator but it takes a long time. Instead I'd like to convert them into a numpy array and then plot it. How do I go about doing that?
Take a look at some of these other posts which seem to answer the basic question of "convert a generator to an array":
How do I build a numpy array from a generator?
How to construct an np.array with fromiter
How to fill a 2D Python numpy array with values from a generator?
numpy fromiter with generator of list
Without knowing exactly what your generator is returning, the best I can do is provide a somewhat generic (but not particularly efficient) example:
#!/usr/bin/env python
import numpy as np

# Sample generator of (x, y, z) tuples
def my_generator():
    for i in range(10):
        yield (i, i*2, i*2 + 1)

def gen_to_numpy(gen):
    return np.array([x for x in gen])

gen = my_generator()
array = gen_to_numpy(gen)
print(type(array))
print(array)
Output:
<class 'numpy.ndarray'>
[[ 0 0 1]
[ 1 2 3]
[ 2 4 5]
[ 3 6 7]
[ 4 8 9]
[ 5 10 11]
[ 6 12 13]
[ 7 14 15]
[ 8 16 17]
[ 9 18 19]]
Again though, I cannot comment on the efficiency of this. You mentioned that it takes a long time to plot by reading points directly from the generator, but converting to a Numpy array will still require going through the whole generator to get the data. It would probably be much more efficient if the laser to pointcloud implementation you are using could provide the data directly as an array, but that is a question for the ROS Answers forum (I notice you already asked this there).
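That said, one way to avoid the intermediate list (my own sketch; I'm assuming the real generator yields flat (x, y, z) tuples of floats, as the field_names argument suggests) is to flatten the stream into np.fromiter and reshape:

import itertools
import numpy as np

def my_generator():
    for i in range(10):
        yield (i, i*2, i*2 + 1)

# Flatten the tuples into one stream of scalars so that np.fromiter can
# consume the generator without building a Python list first, then
# reshape into an (N, 3) array of points.
flat = np.fromiter(itertools.chain.from_iterable(my_generator()),
                   dtype=np.float64)
points = flat.reshape(-1, 3)
print(points.shape)  # (10, 3)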

Replicate IDL 'smooth' in Python 2.7

I have been trying to work out how to replicate IDL's smooth function in Python and I just can't get anything like the same results. (Disclaimer: It is probably 10 years since I touched this kind of mathematical problem so it has been dumped to make way for information like where to find the cheapest local fuel). I am trying to code this:
smooth(b,w,/nan)
where b is a 2D float array containing NaNs (zeros, i.e. missing data, have also been converted to NaN).
From the IDL documents, it appears smooth uses a boxcar, so from scipy.ndimage.filters I have tried:
bsmooth = uniform_filter(b, w)
I am aware that there are some fundamental differences here:

1. The default edge behaviour from IDL is "the end points are copied from the original array to the result with no smoothing", whereas I don't seem to have the option to do this with the uniform filter.

2. Treatment of the NaN elements. In IDL, the /nan keyword seems to mean that where possible the NaN values will be filled by the result of the other points in the window; if there are no valid points to generate a result, the output is set by the MISSING keyword. I thought I could approximate this behaviour after the smoothing using scipy.interpolate's NearestNDInterpolator (thanks to the brilliant explanation by Alex on here: filling gaps on an image using numpy and scipy).
Here is my test array:
>>> b
array([[ 0.97599638,  0.93114936,  0.87070072,  0.5379253 ],
       [ 0.34873217,         nan,  0.40985891,  0.22407863],
       [        nan,         nan,         nan,  0.67532134],
       [        nan,         nan,  0.85441768,         nan]])
My answers bore not the SLIGHTEST resemblance to IDL, whether I use the /nan keyword or not.
IDL> smooth(b,2,/nan)
0.97599638 0.93114936 0.87070072 0.53792530
0.34873217 0.70728749 0.60817236 0.22407863
NaN 0.53766960 0.54091913 0.67532134
NaN NaN 0.85441768 NaN
IDL> smooth(b,2)
0.97599638 0.93114936 0.87070072 0.53792530
0.34873217 -NaN -NaN 0.22407863
-NaN -NaN -NaN 0.67532134
-NaN -NaN 0.85441768 NaN
I confess I find the scipy documentation rather sparse on detail, so I have no idea if I am really doing what I think I am doing. The fact that the two Python approaches which I believed would both smooth the image give different answers suggests that things are not what I understood them to be.
>>> uniform_filter(b, 2)
array([[ 0.97599638,  0.95357287,  0.90092504,  0.70431301],
       [ 0.66236428,         nan,         nan,         nan],
       [        nan,         nan,         nan,         nan],
       [        nan,         nan,         nan,         nan]])
I thought it was a bit odd it was so empty, so I tried this with an array of 100 elements (still using a window of 2) and output the images. The results (the first image is 'b', the second 'bsmooth') were not quite what I was hoping for (images omitted).
Going back to the smaller array and following the examples in: http://scipy.github.io/old-wiki/pages/Cookbook/SignalSmooth which I thought would give the same output as uniform_filter, I tried:
>>> box = np.array([1,1,1,1])
>>> box = box.reshape(2,2)
>>> box
array([[1, 1],
[1, 1]])
>>> bsmooth = scipy.signal.convolve2d(b,box,mode='same')
>>> print bsmooth
[[ 0.97599638 1.90714574 1.80185008 1.40862602]
[ 1.32472855 nan nan 2.04256356]
[ nan nan nan nan]
[ nan nan nan nan]]
Obviously I have completely misunderstood the scipy functions, maybe even the IDL one. If anyone can help me replicate the IDL smooth function as closely as possible, I would be extremely grateful. I am under considerable time pressure to find a solution that doesn't rely on IDL, and I am tossing a coin to decide whether to code the function from scratch or develop a very contagious illness.
How can I perform the same smoothing in Python?
First: please use matplotlib.pyplot.imshow with interpolation="none"; that's nicer to look at, and maybe use greyscale.
So for your example: there is actually no convolution (filter) within scipy or numpy that treats NaN as a missing value (they propagate NaNs through the convolution). At least I've found none so far, and the boundary treatment you want is also (to my knowledge) not implemented. But the boundary values can simply be replaced afterwards.
If you want to do convolution with NaNs you can, for example, use astropy.convolution.convolve. There, NaNs are interpolated using the kernel of your filter. But their convolution has some drawbacks as well: the border handling you want isn't implemented there either, your kernel must be of odd shape, and the sum of your kernel must not be zero (or very close to it).
For example:
from astropy.convolution import convolve
import numpy as np
array = np.random.uniform(10,100, (4,4))
array[1,1] = np.nan
kernel = np.ones((3,3))
convolve(array, kernel)
as an example an initial array of
array([[ 97.19514587, 62.36979751, 93.54811286, 30.23567842],
[ 51.02184613, nan, 46.14769821, 60.08088041],
[ 20.86482452, 42.39661484, 36.96961278, 96.89180175],
[ 45.54453509, 76.61274347, 46.44485141, 25.40985372]])
will become:
array([[ 266.9009961 , 406.59680717, 348.69637399, 230.01236989],
[ 330.16243546, 506.82785931, 524.95440336, 363.87378443],
[ 292.75477064, 422.31693304, 487.26826319, 311.94469828],
[ 185.41871792, 268.83318211, 324.72547798, 205.71611967]])
If you want to "normalize" it, astropy offers the normalize_kernel parameter:
convolved = convolve(array, kernel, normalize_kernel=True)
array([[ 29.58753936, 42.09982189, 49.31793529, 33.00203873],
[ 49.87040638, 65.67695002, 66.10447436, 40.44026448],
[ 52.51126383, 63.03914444, 60.85474739, 35.88011742],
[ 39.40188443, 46.82350749, 40.1380926 , 22.46090152]])
If you want to replace the "edge" values with the ones from the original array just replace them:
convolved[0,:] = array[0,:]
convolved[-1,:] = array[-1,:]
convolved[:,0] = array[:,0]
convolved[:,-1] = array[:,-1]
So that's what the existing packages offer (as far as I know). If you want to learn a bit of Cython or numba, you can easily write your own convolution that is not much slower (only a factor of 2-10) than the numpy/scipy ones but does EXACTLY what you want without messing around.
Since this is not something that is available in the Python packages, and because I saw the question asked several times during my research without satisfactory answers, here is how I solved the issue.
Provided is a test version of my function that I still need to tidy up. I am sure there will be better ways to do the things I have done, as I'm still fairly new to Python; please do recommend any appropriate changes.
Plots use the autumn colourmap just because it allowed me to see the NaNs clearly.
My results:
IDL propagate
0.033369284 0.067915268 0.96602046 0.85623550
0.30435592 NaN NaN 100.00000
0.94065958 NaN NaN 0.90966976
0.018516513 0.044460904 0.051047217 NaN
python propagate
[[ 3.33692829e-02 6.79152655e-02 9.66020487e-01 8.56235492e-01]
[ 3.04355923e-01 nan nan 1.00000000e+02]
[ 9.40659566e-01 nan nan 9.09669768e-01]
[ 1.85165123e-02 4.44609040e-02 5.10472165e-02 nan]]
IDL replace
0.033369284 0.067915268 0.96602046 0.85623550
0.30435592 0.47452110 14.829881 100.00000
0.94065958 0.33833817 17.002417 0.90966976
0.018516513 0.044460904 0.051047217 NaN
python replace
[[ 3.33692829e-02 6.79152655e-02 9.66020487e-01 8.56235492e-01]
[ 3.04355923e-01 4.74521092e-01 1.48298812e+01 1.00000000e+02]
[ 9.40659566e-01 3.38338177e-01 1.70024175e+01 9.09669768e-01]
[ 1.85165123e-02 4.44609040e-02 5.10472165e-02 nan]]
My function:
#!/usr/bin/env python
# smooth.py
__version__ = 0.1
# Version 0.1 29 Feb 2016 ELH Test release

import numpy as np
import matplotlib.pyplot as mp

def Smooth(v1, w, nanopt):
    # v1 is the input 2D numpy array.
    # w is the width of the square window along one dimension.
    # nanopt can be 'replace' or 'propagate'.
    '''
    v1 = np.array(
        [[3.33692829e-02, 6.79152655e-02, 9.66020487e-01, 8.56235492e-01],
         [3.04355923e-01, np.nan        , 4.86013025e-01, 1.00000000e+02],
         [9.40659566e-01, 5.23314093e-01, np.nan        , 9.09669768e-01],
         [1.85165123e-02, 4.44609040e-02, 5.10472165e-02, np.nan        ]])
    w = 2
    '''
    mp.imshow(v1, interpolation='None', cmap='autumn')
    mp.show()

    # Make a copy of the array for the output.
    vout = np.copy(v1)

    # If w is even, add one.
    if w % 2 == 0:
        w = w + 1

    # Get the size of each dim of the input.
    r, c = v1.shape

    # Assume that w, the width of the window, is always square.
    # Integer division (//) keeps the indices ints on Python 2 and 3.
    startrc = (w - 1) // 2
    stopr = r - ((w + 1) // 2) + 1
    stopc = c - ((w + 1) // 2) + 1

    # For all pixels within the border defined by the box size,
    # calculate the average in the window. There are two options:
    # ignore NaNs and replace the value where possible, or
    # propagate the NaNs.
    for col in range(startrc, stopc):
        # Calculate the window start and stop columns.
        startwc = col - (w // 2)
        stopwc = col + (w // 2) + 1
        for row in range(startrc, stopr):
            # Calculate the window start and stop rows.
            startwr = row - (w // 2)
            stopwr = row + (w // 2) + 1
            # Extract the window.
            window = v1[startwr:stopwr, startwc:stopwc]
            if nanopt == 'replace':
                # If we're replacing NaNs, select only the finite elements.
                window = window[np.isfinite(window)]
            # Calculate the mean of the window.
            vout[row, col] = np.mean(window)

    mp.imshow(vout, interpolation='None', cmap='autumn')
    mp.show()
    return vout
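For reference, a usage sketch on the docstring's sample data (not part of the original post); the two nanopt modes correspond to the 'propagate' and 'replace' results shown above:

b = np.array(
    [[3.33692829e-02, 6.79152655e-02, 9.66020487e-01, 8.56235492e-01],
     [3.04355923e-01, np.nan,         4.86013025e-01, 1.00000000e+02],
     [9.40659566e-01, 5.23314093e-01, np.nan,         9.09669768e-01],
     [1.85165123e-02, 4.44609040e-02, 5.10472165e-02, np.nan]])

prop = Smooth(b, 2, 'propagate')  # NaN wherever the window touches a NaN
repl = Smooth(b, 2, 'replace')    # mean over the finite values in the window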

Correlation in statsmodels using Python

Could you please help me find the correlation of these two lists using statsmodels in Python?
a=[1.0,2.0,3.0,2.0]
b=[789.0,786.0,788.0,785.0]
You can use numpy's built-in functions:
>>> import numpy as np
>>> a = np.array([1.0,2.0,3.0,2.0])
>>> b = np.array([789.0,786.0,788.0,785.0])
>>> np.corrcoef(a,b)
array([[ 1.       , -0.2236068],
       [-0.2236068,  1.       ]])
Just use indexing to extract the right one:
np.corrcoef(a,b)[0,1]
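If you also want a p-value alongside the coefficient, scipy.stats.pearsonr is an alternative worth knowing (a sketch; it is Scipy rather than statsmodels, but it answers the same question):

from scipy import stats

a = [1.0, 2.0, 3.0, 2.0]
b = [789.0, 786.0, 788.0, 785.0]
r, p = stats.pearsonr(a, b)
print(r)  # -0.2236..., matching np.corrcoef(a, b)[0, 1]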