I am new to Python and have been following a basic tutorial (https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html#py-depthmap) for creating a disparity map from two images, but I have run into several errors.
I am using Python 2.7, OpenCV 3.3.0, matplotlib 1.3, and numpy 1.10.2.
This is my code v1:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread("C:\Python27\tsukuba_l.png,0")
imgR = cv2.imread("C:\Python27\tsukuba_r.png,0")
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL,imgR)
plt.imshow(disparity,'gray')
plt.show()
I updated the tutorial's cv2.createStereoBM call to cv2.StereoBM_create to match my OpenCV version, but then got an error on the second-to-last line (disparity = ...): (-211) SADWindowSize must be odd, be within 5..255 and not be larger than image width or height. I tried reducing the block size, but the error remained. I have checked that the image paths are correct and that both images are the same size.
I then attempted to use StereoSGBM_create instead; v2 code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread("C:\Python27\tsukuba_l.png,0")
imgR = cv2.imread("C:\Python27\tsukuba_r.png,0")
stereo = cv2.StereoSGBM_create(minDisparity=5, numDisparities=16, blockSize=5)
disparity = stereo.compute(imgL,imgR)
plt.imshow(disparity,'gray')
plt.show()
However, this returns:
TypeError: Image data can not convert to float.
Any idea why these errors may be occurring?
Typo: the closing quote is in the wrong place, so ",0" becomes part of the file name instead of being passed as the flag argument. Change
imgL = cv2.imread("C:\Python27\tsukuba_l.png,0")
imgR = cv2.imread("C:\Python27\tsukuba_r.png,0")
to
imgL = cv2.imread(r"C:\Python27\tsukuba_l.png", 0)
imgR = cv2.imread(r"C:\Python27\tsukuba_r.png", 0)
or, better:
imgL = cv2.imread(r"C:\Python27\tsukuba_l.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread(r"C:\Python27\tsukuba_r.png", cv2.IMREAD_GRAYSCALE)
Note the raw-string prefix r"...": without it, Python interprets the \t in \tsukuba as a tab character, which also breaks the path. From the cv2.imread documentation:
Warning: Even if the image path is wrong, it won't throw any error; the call just returns None.
That explains the errors: imread silently returned None instead of an image, so everything downstream (stereo.compute, plt.imshow) was operating on None.
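Since imread fails silently like this, it's worth checking for None immediately after loading. A minimal sketch (paths as in the question; the explicit check is the only addition):
import cv2
# Raw strings prevent escape sequences such as \t from mangling Windows paths.
imgL = cv2.imread(r"C:\Python27\tsukuba_l.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread(r"C:\Python27\tsukuba_r.png", cv2.IMREAD_GRAYSCALE)
# cv2.imread returns None instead of raising when a file cannot be read,
# so fail fast here rather than deep inside stereo.compute or plt.imshow.
if imgL is None or imgR is None:
    raise IOError("Could not read one of the input images; check the paths.")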
I am using imageio to write PNG images to file.
import numpy as np
import matplotlib as mpl
import matplotlib.cm as cm
import imageio  # for saving the image
hm_colors = ['blue', 'white', 'red']
cmap = mpl.colors.LinearSegmentedColormap.from_list('bwr', hm_colors)
data = np.array([[1, 2, 3], [5, 6, 7]])
norm = mpl.colors.Normalize(vmin=-3, vmax=3)
colormap = cm.ScalarMappable(norm=norm, cmap=cmap)
im = colormap.to_rgba(data)
# scale the data up by integer factors w and h
w, h = 4, 4  # example values; in my real code these are set elsewhere
im = np.repeat(im, w, axis=1)
im = np.repeat(im, h, axis=0)
# save the picture
imageio.imwrite("my_img.png", im)
This process runs automatically, and I noticed some error messages saying:
Error closing: 'Image' object has no attribute 'fp'.
Before this message I get a warning:
/usr/local/lib/python2.7/dist-packages/imageio/core/util.py:78: UserWarning: Lossy conversion from float64 to uint8, range [0, 1]
However, the images seem to be generated and saved just fine.
I haven't been able to find input data that reproduces this message.
Any idea why I get this error and why it doesn't noticeably affect the results? I don't use PIL.
One possible reason could come from using this in Celery.
Thanks!
L.
I encountered the same issue using imageio.imwrite in Python 3.5. It's fairly harmless, except that it stops garbage collection and leads to excessive memory usage when writing thousands of images. The solution was to use the PIL module, which is a dependency of imageio. Replace the last line of your code with:
from PIL import Image
# PIL can't write float64 RGBA arrays directly, so convert to 8-bit first.
image = Image.fromarray((im * 255).astype(np.uint8))
image.save('my_img.png')
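If you would rather keep imageio, a minimal sketch of the same idea, assuming im is the float64 RGBA array (values in [0, 1]) produced by colormap.to_rgba: doing the uint8 conversion yourself also silences the lossy-conversion warning, because imageio no longer has to guess how to convert the floats.
import numpy as np
import imageio
# Explicit conversion avoids the "Lossy conversion from float64 to uint8" warning.
imageio.imwrite("my_img.png", (im * 255).astype(np.uint8))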
I'm trying to use Keras for image recognition, but I keep getting errors like:
ValueError: Error when checking input: expected input_9 to have 4 dimensions, but got array with shape (100, 300, 300)
I tried changing the values of parameters that relate to dimensions, and also tried to reshape the images, but I still got errors.
In fact, I don't understand why I got this error. Why does it expect 4 dimensions?
Here's my code:
import os
import numpy as np
import pandas as pd
import scipy
import sklearn
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Convolution2D, Flatten, MaxPooling2D, Reshape, InputLayer
import cv2
from skimage import io
import urllib2
from PIL import Image
%matplotlib inline
I chose 50 rose images and 50 sunflower images from imagenet:
rose_file = "http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n04971313"
sunflower_file = "http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n11978713"
images = []
image_num = 50
rose_urls = urllib2.urlopen(rose_file)
rose_ct = 0
for rose_url in rose_urls:
    try:
        resp = urllib2.urlopen(rose_url)
        rose_image = np.asarray(bytearray(resp.read()), dtype="uint8")
        images.append(rose_image)
        rose_ct += 1
        if rose_ct == image_num:  # only use 50 images here, otherwise loading time is too long
            break
    except:  # some images are no longer available
        pass
sunflower_urls = urllib2.urlopen(sunflower_file)
sunflower_ct = 0
for sunflower_url in sunflower_urls:
    try:
        resp = urllib2.urlopen(sunflower_url)
        sunflower_image = np.asarray(bytearray(resp.read()), dtype="uint8")
        images.append(sunflower_image)
        sunflower_ct += 1
        if sunflower_ct == image_num:  # only use 50 images here, otherwise loading time is too long
            break
    except:  # some images are no longer available
        pass
Resize training images to 300*300:
from keras.utils.np_utils import to_categorical
for i in range(len(images)):
    images[i] = cv2.resize(np.array(images[i]), (300, 300))
images = np.array(images)
labels = [0 for i in range(image_num)]
labels.extend([1 for j in range(image_num)])
labels = np.array(labels)
labels = to_categorical(labels)
Build the model:
filters=10
filtersize=(5,5)
epochs=7
batchsize=128
input_shape=(300,300, 3)
model = Sequential()
model.add(keras.layers.InputLayer(input_shape=input_shape))
model.add(keras.layers.convolutional.Conv2D(filters, filtersize, strides=(1, 1),
padding='valid', data_format="channels_last", activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units=2, input_dim=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(images, labels, epochs=epochs, batch_size=batchsize, validation_split=0.3)
model.summary()
Here, I tried changing input_shape=(300, 300, 3) to input_shape=(300, 300, 3, 0), hoping this would mean 4 dimensions, but got an error saying:
Input 0 is incompatible with layer conv2d_13: expected ndim=4, found ndim=5
Do you know why I got these errors? And how can I deal with this problem?
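For context on why Keras wants 4 dimensions: Conv2D with data_format="channels_last" expects batches of shape (num_samples, height, width, channels); input_shape covers only the last three, and Keras adds the batch dimension itself. A minimal sketch of the shape bookkeeping, assuming raw_buffers is the list of downloaded byte arrays built by the loops above (note that np.asarray(bytearray(...)) yields undecoded 1-D byte buffers, so a decode step is needed before they are real images):
import numpy as np
import cv2
# Decode each 1-D uint8 byte buffer into an actual H x W x 3 BGR image,
# then resize; skip any download that failed to decode.
decoded = [cv2.imdecode(buf, cv2.IMREAD_COLOR) for buf in raw_buffers]
decoded = [cv2.resize(img, (300, 300)) for img in decoded if img is not None]
batch = np.array(decoded)
print(batch.shape)  # (num_samples, 300, 300, 3): the 4 dimensions Conv2D expects
# Grayscale batches of shape (num_samples, 300, 300) need an explicit channel axis:
# batch = np.expand_dims(batch, axis=-1)  # -> (num_samples, 300, 300, 1)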
I modified the FCN net and designed a new net in which I use two ImageData layers as input and want the net to produce a picture as output.
Here are the train_val.prototxt and the deploy.prototxt.
The original pictures and the labels are both grayscale images, and both are 224*224.
I've trained a caffemodel and use infer.py to run segmentation with that model, but I get this error:
F0505 06:15:08.072602 30713 net.cpp:767] Check failed: target_blobs.size() == source_layer.blobs_size() (2 vs. 1) Incompatible number of blobs for layer conv1
Here is the infer.py file:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt  # needed for the plt calls below
caffe_root = '/home/zhaimo/'
import sys
sys.path.insert(0, caffe_root + 'caffe-master/python')
import caffe
im = Image.open('/home/zhaimo/fcn-master/data/vessel/test/13.png')
in_ = np.array(im, dtype=np.float32)
#in_ = in_[:,:,::-1]
#in_ -= np.array((104.00698793,116.66876762,122.67891434))
#in_ = in_.transpose((2,0,1))
net = caffe.Net('/home/zhaimo/fcn-master/mo/deploy.prototxt', '/home/zhaimo/fcn-master/mo/snapshot/train/_iter_200000.caffemodel', caffe.TEST)
net.blobs['data'].reshape(1, *in_.shape)
net.blobs['data'].data[...] = in_
net.forward()
out = net.blobs['score'].data[0].argmax(axis=0)
plt.imshow(out)  # draw the result; savefig alone would save an empty figure
plt.axis('off')
plt.savefig('/home/zhaimo/fcn-master/mo/result/13.png')
How can I solve this problem?
The problem is with the bias term of conv1. In your train.prototxt it is set to false, but in your deploy.prototxt it is not set, and by default it is true. That is why the weight loader is looking for two blobs (weights and bias) for conv1 while the caffemodel only contains one.
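A sketch of the fix in deploy.prototxt, with num_output and kernel_size as placeholders (use the values from your train_val.prototxt): stating bias_term explicitly keeps the two definitions of conv1 consistent, so the loader expects the same number of blobs as the caffemodel contains.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64    # placeholder; copy from train_val.prototxt
    kernel_size: 3    # placeholder; copy from train_val.prototxt
    bias_term: false  # must match train_val.prototxt
  }
}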
I am trying to use the background subtractor module in OpenCV. I am referring to this blog. I am not able to use it because I keep getting the error message 'module' object has no attribute 'createBackgroundSubtractorMOG'. I have googled all the answers to this problem and tried every name variant I could find, such as createBackgroundSubtractor, BackgroundSubtractor, createBackgroundSubtractorMOG2, etc., but I still get the same error message. I am using:
opencv 3.0.0
python 2.7.10
ubuntu 15.10
Here's my code:
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG(detectShadows=True)
while(1):
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)
    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(0)
    if k == 27:
        break
cap.release()
cv2.destroyAllWindows()
Got my question solved. What I did: I opened the Python command line and ran dir(cv2), which listed all the functions I can call, and there I found BackgroundSubtractorMOG, and it worked!
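For anyone hitting the same attribute error, a minimal sketch of that discovery step: filtering dir(cv2) shows exactly which background-subtractor names your particular build exposes, since the names vary across OpenCV versions and builds.
import cv2
# Print every attribute of the cv2 module related to background subtraction.
print([name for name in dir(cv2) if 'ackgroundSubtractor' in name])
# In OpenCV 3.x, createBackgroundSubtractorMOG2 is typically in the main module,
# while the plain MOG variant lives in opencv_contrib's bgsegm module.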
Case 1: This code runs fine:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('lena.png', 0)
imgR = cv2.imread('lena.png', 0)
stereo = cv2.StereoBM(0, ndisparities=16, SADWindowSize=15)
stereo.compute(imgL, imgR)
Case 2: But this fails on the last line:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('lena.png', 1)
imgR = img[:,:,2]
imgL = img[:,:,1]
stereo = cv2.StereoBM(0, ndisparities=16, SADWindowSize=15)
stereo.compute(imgL, imgR)
The error says:
error: /build/buildd/opencv-2.3.1/modules/calib3d/src/stereobm.cpp:802: error: (-211) SADWindowSize must be odd, be within 5..255 and be not larger than image width or height in function findStereoCorrespondenceBM
The really strange thing is that it fails with the same message even if I put the following two assignment lines right in front of the imgR = and imgL = lines, i.e.:
img = cv2.imread('lena.png', 1)
img[:,:,2] = cv2.imread('lena.png', 0)
img[:,:,1] = cv2.imread('lena.png', 0)
imgR = img[:,:,2]
imgL = img[:,:,1]
I'm still quite new to Python, so maybe it's a misunderstanding: can somebody explain why case 1 works but case 2 gives me an error?
It looks like a bug in an old OpenCV version.
Part of OpenCV 2.4.4 changelog:
Numerous bug fixes, and optimizations, including in: blendLinear, square samples, erode/dilate, Canny, convolution fixes with AMD FFT library, mean shift filtering, Stereo BM.
Most likely upgrading to 2.4.4 or any higher version (I suggest you use the latest stable release) will solve your problem.
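If upgrading is not an option, one workaround worth trying; this is an assumption on my part rather than something I have verified against 2.3.1. Channel slices like img[:,:,2] are non-contiguous views into the underlying array, and some old cv2 functions handled such views badly, so forcing contiguous copies may sidestep the bug (this sketch uses the old OpenCV 2.x StereoBM API from the question):
import numpy as np
import cv2
img = cv2.imread('lena.png', 1)
# Channel slices are non-contiguous views; make standalone contiguous copies.
imgR = np.ascontiguousarray(img[:, :, 2])
imgL = np.ascontiguousarray(img[:, :, 1])
stereo = cv2.StereoBM(0, ndisparities=16, SADWindowSize=15)
disparity = stereo.compute(imgL, imgR)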