How to merge 2 gray-scale images in Python with OpenCV - python-2.7

I want to merge two single-channel, grayscale images with OpenCV's merge method. Here is the code:
...
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
zeros = numpy.zeros(img_gray.shape)
merged = cv2.merge([img_gray, zeros])
...
The problem is that the grayscale image doesn't have a depth attribute that I can set to 1, and the merge function requires images of the same size and the same depth. I get this error:
error: /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/convert.cpp:296: error: (-215) mv[i].size == mv[0].size && mv[i].depth() == depth in function merge
How can I merge these arrays?

Solved: I had to change the dtype of img_gray from uint8 to float64:
img_gray = numpy.float64(img_gray)
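For context, a minimal sketch of that fix (assuming img is a BGR image already loaded with cv2.imread):
import cv2
import numpy as np
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # uint8, single channel
img_gray = np.float64(img_gray)                     # cast to match the zeros array
zeros = np.zeros(img_gray.shape)                    # np.zeros defaults to float64
merged = cv2.merge([img_gray, zeros])               # same size and depth, so merge succeeds
The same error also goes away if you instead create the zeros array with the image's original dtype, e.g. numpy.zeros(img_gray.shape, dtype=numpy.uint8).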
OpenCV Version 2.4.11
import cv2
import numpy as np
# Load the image
img1 = cv2.imread(paths[0], cv2.IMREAD_UNCHANGED)
# could also use cv2.split() but per the docs (link below) it's time consuming
# split the channels using Numpy indexing, notice it's a zero based index unlike MATLAB
b = img1[:, :, 0]
g = img1[:, :, 1]
r = img1[:, :, 2]
# to avoid overflows and truncation in turn, clip the image in [0.0, 1.0] inclusive range
b = b.astype(np.float)
b /= 255
# manipulate the channels ... in my case, adding Gaussian noise to the blue channel (b => b1)
b1 = b1.astype(np.float)
g = g.astype(np.float)
r = r.astype(np.float)
# gotcha : notice the parameter is an array of channels
noisy_blue = cv2.merge((b1, g, r))
# store the outcome to disk
cv2.imwrite('output/NoisyBlue.png', noisy_blue)
N.B.:
Alternatively, you may also use np.double instead of np.float in astype for type casting.
OpenCV Documentation Link

Related

Why can't I find the cvPoint2D32f function in OpenCV?

I am using Python 2.7.13 |Anaconda 4.3.1 (64-bit) and OpenCV '2.4.13.2'.
I am trying to apply a geometrical transformation to images, for which I need to use
CvPoint2D32f center = cvPoint2D32f(x, y)
but I am not able to find this function. Is it not available in Python, or is it deprecated?
The type CvPoint2D32f is an older/deprecated type. OpenCV 2 introduced the type Point2f to replace it. Regardless, you don't need that type in Python. What you likely need is a numpy array with the dtype = np.float32. For points, the array should be constructed like:
points = np.array([ [[x1, y1]], ..., [[xn, yn]] ], dtype=np.float32)
You won't always need to set the dtype, as some functions (like cv2.findHomography() for example) will take integers.
For an example of these points being used, with the images from this tutorial, we could do the following to find and apply a homography to an image:
import cv2
import numpy as np
# source image and four point correspondences in it
src = cv2.imread('book2.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630], [64, 601]], dtype=np.float32)
# destination image and the corresponding four points
dst = cv2.imread('book1.jpg')
pts_dst = np.array([[318, 256], [534, 372], [316, 670], [73, 473]], dtype=np.float32)
# compute the perspective transform from the correspondences and warp the source
transf = cv2.getPerspectiveTransform(pts_src, pts_dst)
warped = cv2.warpPerspective(src, transf, (dst.shape[1], dst.shape[0]))
# blend the warped image with the destination for visual comparison
alpha = 0.5
beta = 1 - alpha
blended = cv2.addWeighted(warped, alpha, dst, beta, 1.0)
cv2.imshow("Blended Warped Image", blended)
cv2.waitKey(0)
Which will result in the following image:

Python 2.7 - how to save a picture through tuple or set values?

So I've been using opencv2 and PIL to get pixel values. They're saved as such
(0, 0, 0), (1, 1, 1)
I've tried like 7 different ways to use this data to create an image.
My biggest problem is I can't seem to get putdata to work with my tuple.
I would show code, but my laptop's battery is flat and my code is broken anyway.
Tldr: how to save an image with PIL using pixel values stored in a tuple?
Something like this should work
from PIL import Image
W = 200
H = 200
img = Image.new("RGB", (W, H))
pixel_list = [(i%256,i%256,i%256) for i in range(W*H)]
i_pixel = 0
for x in range(W):
    for y in range(H):
        img.putpixel((x, y), pixel_list[i_pixel])
        i_pixel += 1
img.save('result.png')
With the following result
Note: I read here the following:
In 1.1.6, the above is better written as:
pix = im.load()
for i in range(n):
    ...
    pix[x, y] = value
But I couldn't get that to work.
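For what it's worth, here is a small sketch of that pixel-access-object approach, using the same W, H and pixel_list as above (output filename is just an example):
from PIL import Image
W = 200
H = 200
pixel_list = [(i % 256, i % 256, i % 256) for i in range(W * H)]
img = Image.new("RGB", (W, H))
pix = img.load()              # pixel access object, supports pix[x, y] = value
i_pixel = 0
for x in range(W):
    for y in range(H):
        pix[x, y] = pixel_list[i_pixel]
        i_pixel += 1
img.save('result_load.png')
If the pixel values are already a flat sequence of (R, G, B) tuples, img.putdata(pixel_list) fills the image in one call; note that putdata fills the image row by row, whereas the loop above fills it column by column.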

Python cv2 Image Pyramids

Trying to implement the famous Orange/Apple pyramids blending (cv2 Image Pyramids).
Note: Both images shape is 307x307.
However, since the result image is blurred due to clipping of values in cv2.subtract and cv2.add (as stated in cv2 vs numpy Matrix Arithmetics), I used numpy arithmetic instead, as suggested in StackOverflow: Reconstructed Image after Laplacian Pyramid Not the same as original image.
I tested this by decomposing and reconstructing a single image: with the numpy arithmetic the reconstructed image has the same max, min, and average pixel values as the original, which is not the case with the cv2 arithmetic.
However, at pyramid level 7 the result image gets a 'noise' of a red dot, and at level 9 the result image gets a lot of green pixel noise. Images of levels 6, 7, 9 - Imgur Album.
Any ideas why this happens? I would guess the green noise at level 9 appears because the image shrinks below a 1x1 shape, but what about the red dot at level 7?
EDIT : Code Added
numberOfPyramids = 9
# generate Gaussian pyramids for A and B Images
GA = A.copy()
GB = B.copy()
gpA = [GA]
gpB = [GB]
for i in xrange(numberOfPyramids):
    GA = cv2.pyrDown(GA)
    GB = cv2.pyrDown(GB)
    gpA.append(GA)
    gpB.append(GB)
# generate Laplacian Pyramids for A and B Images
lpA = [gpA[numberOfPyramids - 1]]
lpB = [gpB[numberOfPyramids - 1]]
for i in xrange(numberOfPyramids - 1, 0, -1):
    geA = cv2.pyrUp(gpA[i], dstsize = np.shape(gpA[i-1])[:2])
    geB = cv2.pyrUp(gpB[i], dstsize = np.shape(gpB[i-1])[:2])
    laplacianA = gpA[i - 1] - geA if i != 1 else cv2.subtract(gpA[i-1], geA)
    laplacianB = gpB[i - 1] - geB if i != 1 else cv2.subtract(gpB[i-1], geB)
    lpA.append(laplacianA)
    lpB.append(laplacianB)
# Now add left and right halves of images in each level
LS = []
for la, lb in zip(lpA, lpB):
    _, cols, _ = la.shape
    ls = np.hstack((la[:, : cols / 2], lb[:, cols / 2 :]))
    LS.append(ls)
# now reconstruct
ls_ = LS[0]
for i in xrange(1, numberOfPyramids):
    ls_ = cv2.pyrUp(ls_, dstsize = np.shape(LS[i])[:2])
    ls_ = ls_ + LS[i] if i != numberOfPyramids - 1 else cv2.add(ls_, LS[i])
cv2.imshow(namedWindowName, ls_)
cv2.waitKey()
After reading the original article about the Laplacian pyramid, I realize I had misunderstood this method: we can fully reconstruct the original image without blur because we keep the extra pixel information, and it is true that clipping values leads to blur. Well, now we come back to the beginning again :)
The code you posted is still clipping values. I advise you to store the Laplacian pyramid as int16 and not use cv2.subtract. Hope it works.
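To make that suggestion concrete, here is a minimal sketch (not the asker's exact code) of building and reconstructing a single-image Laplacian pyramid with int16 levels, so negative differences survive and clipping happens only once at the end:
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels):
    # Gaussian pyramid
    gp = [img.copy()]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    # Laplacian pyramid, stored as signed 16-bit so negative differences are kept
    lp = [gp[-1].astype(np.int16)]
    for i in range(levels, 0, -1):
        up = cv2.pyrUp(gp[i], dstsize=(gp[i - 1].shape[1], gp[i - 1].shape[0]))
        lp.append(gp[i - 1].astype(np.int16) - up.astype(np.int16))
    return lp

def reconstruct(lp):
    out = lp[0]
    for level in lp[1:]:
        up = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0]))
        out = up.astype(np.int16) + level
    # clip and convert back to uint8 only once, at the very end
    return np.clip(out, 0, 255).astype(np.uint8)
The blending code would apply the same idea to both input pyramids; the key point is to avoid uint8 wrap-around and clipping until the final conversion.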

Align depth image to RGB image

I am trying to generate a point cloud from images captured by a Kinect with Python and libfreenect, but I couldn't align the depth data to the RGB data taken by the Kinect.
I applied Nicolas Burrus's equations, but the two images drifted further apart. Is there something wrong with my code?
cx_d = 3.3930780975300314e+02
cy_d = 2.4273913761751615e+02
fx_d = 5.9421434211923247e+02
fy_d = 5.9104053696870778e+02
fx_rgb = 5.2921508098293293e+02
fy_rgb = 5.2556393630057437e+02
cx_rgb = 3.2894272028759258e+02
cy_rgb = 2.6748068171871557e+02
RR = np.array([
    [0.999985794494467, -0.003429138557773, 0.00408066391266],
    [0.003420377768765, 0.999991835033557, 0.002151948451469],
    [-0.004088009930192, -0.002137960469802, 0.999989358593300]
])
TT = np.array([1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02])
# uu, vv are indices in depth image
def depth_to_xyz_and_rgb(uu, vv):
    # get z value in meters
    pcz = depthLookUp[depths[vv, uu]]
    # compute x,y values in meters
    pcx = (uu - cx_d) * pcz / fx_d
    pcy = (vv - cy_d) * pcz / fy_d
    # apply extrinsic calibration
    P3D = np.array([pcx, pcy, pcz])
    P3Dp = np.dot(RR, P3D) - TT
    # rgb indexes that P3D should match
    uup = P3Dp[0] * fx_rgb / P3Dp[2] + cx_rgb
    vvp = P3Dp[1] * fy_rgb / P3Dp[2] + cy_rgb
    # return a point in point cloud and its corresponding color indices
    return P3D, uup, vvp
Is there anything I did wrong? Any help is appreciated
First, check your calibration numbers. Your rotation matrix is approximately the identity and, assuming your calibration frame is metric, your translation vector says that the second camera is 2 centimeters to the side and one centimeter displaced in depth. Does that approximately match your setup? If not, you may be working with the wrong scaling (likely using a wrong number for the characteristic size of your calibration target - a checkerboard?).
Your code looks correct - you are re-projecting a pixel of the depth camera at a known depth, and then projecting it back into the second camera to get the corresponding rgb value.
One thing I would check is whether you're using your coordinate transform in the right direction. IIRC, OpenCV produces it as [R | t], but you are using it as [R | -t], which looks suspicious. Perhaps you meant to use its inverse, which would be [R' | -R'*t], where I use the apostrophe to mean transposition.
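As a hedged sketch (not tested against the asker's rig), the two candidate directions look like this, with RR and TT as calibrated above:
import numpy as np

def depth_to_rgb_frame(P3D, RR, TT):
    # forward extrinsics [R | t]: X_rgb = R * X_depth + t
    return np.dot(RR, P3D) + TT

def rgb_to_depth_frame(P3Dp, RR, TT):
    # inverse extrinsics [R' | -R'*t]: X_depth = R' * (X_rgb - t)
    return np.dot(RR.T, P3Dp - TT)
Trying the re-projection with each of these in place of np.dot(RR, P3D) - TT should quickly show which convention the calibration tool used.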

Color image segmentation with Python

I have many pictures as below:
My objective is to identify those "beads", mark each one with a circle, and count how many are detected.
I tried to use image segmentation algorithms in Python; the source code is below:
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import blob_dog, blob_log, blob_doh
from math import sqrt
from skimage.color import rgb2gray
from scipy import misc # try
image = misc.imread('test.jpg')
image_gray = rgb2gray(image)
blobs_log = blob_log(image_gray, max_sigma=10, num_sigma=5, threshold=.1)
# Compute radii in the 3rd column.
blobs_log[:, 2] = blobs_log[:, 2] * sqrt(2)
blobs_dog = blob_dog(image_gray, max_sigma=2, threshold=.051)
blobs_dog[:, 2] = blobs_dog[:, 2] * sqrt(2)
blobs_doh = blob_doh(image_gray, max_sigma=2, threshold=.01)
blobs_list = [blobs_log, blobs_dog, blobs_doh]
colors = ['yellow', 'lime', 'red']
titles = ['Laplacian of Gaussian', 'Difference of Gaussian',
'Determinant of Hessian']
sequence = zip(blobs_list, colors, titles)
for blobs, color, title in sequence:
    fig, ax = plt.subplots(1, 1)
    ax.set_title(title)
    ax.imshow(image, interpolation='nearest')
    for blob in blobs:
        y, x, r = blob
        c = plt.Circle((x, y), r, color=color, linewidth=2, fill=False)
        ax.add_patch(c)
plt.show()
The best results obtained so far are still unsatisfactory:
How can I improve it?
You could use Gimp or Photoshop to test some filters and color changes that differentiate the circles from the background. Brightness and contrast adjustments may work. Then you can apply an edge detector to detect the circles.
By converting this image to grayscale you have effectively thrown away the most powerful cue you have to segment the beads - their distinctive green color. Try running the same code but replace
image_gray = rgb2gray(image)
with
image_gray = image[:,:,1]
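One note on the channel index: misc.imread returns channels in RGB order, so index 1 is the green channel; the same index would also pick green if the image were loaded with cv2.imread, since BGR keeps green in the middle as well.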