When processing an image with text in OpenCV, my opening operation does not result in proper output data. The issue is quite similar to the one described in this article:
http://www.cpe.eng.cmu.ac.th/wp-content/uploads/CPE752_06part2.pdf
From what I can see, people suggest using reconstruction operations. Is there any built-in mechanism in OpenCV, or some known library/code that implements this?
Here's my Python 3 implementation, in analogy to MATLAB's imreconstruct algorithm:
import cv2
import numpy as np
def imreconstruct(marker: np.ndarray, mask: np.ndarray, radius: int = 1):
    """Iteratively expand the marker's white area, keeping it limited by the mask during each iteration.
    :param marker: Grayscale image where the initial seed is white on a black background.
    :param mask: Grayscale mask where the valid area is white on a black background.
    :param radius: Can be increased to improve expansion speed while causing decreased isolation from nearby areas.
    :returns: A copy of the last expansion.
    Written By Semnodime.
    """
    kernel = np.ones(shape=(radius * 2 + 1,) * 2, dtype=np.uint8)
    while True:
        expanded = cv2.dilate(src=marker, kernel=kernel)
        cv2.bitwise_and(src1=expanded, src2=mask, dst=expanded)
        # Termination criterion: expansion didn't change the image at all
        if (marker == expanded).all():
            return expanded
        marker = expanded
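A quick usage sketch (the file name, the Otsu threshold, and the erosion size are only placeholders, not part of the original code): derive a marker by eroding the mask itself, then grow it back under the mask:

binary = cv2.threshold(cv2.imread('text.png', 0), 0, 255,
                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Seeds well inside the strokes; anything not reachable from them stays black.
seed = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=2)
restored = imreconstruct(marker=seed, mask=binary)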
This answer arrives late, but here is the basic algorithm for under-reconstruction:
1. Inputs are two images: ImReference and ImMarker, with marker <= reference
2. Intermediate image: ImRec
3. Output image: ImResult
4. Copy ImMarker into ImRec
5. Copy ImRec into ImResult
6. ImDilated = Dilation(ImResult)
7. ImRec = Minimum(ImDilated, ImReference)
8. If ImRec != ImResult then return to step 5.
It's not the most optimal algorithm, but it uses only basic operations.
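A minimal Python/OpenCV sketch of those steps, assuming single-channel uint8 images with ImMarker <= ImReference elementwise and a 3x3 structuring element:

import cv2
import numpy as np

def under_reconstruction(im_marker, im_reference):
    kernel = np.ones((3, 3), np.uint8)
    im_rec = im_marker.copy()                       # step 4: copy ImMarker into ImRec
    while True:
        im_result = im_rec.copy()                   # step 5: copy ImRec into ImResult
        im_dilated = cv2.dilate(im_result, kernel)  # step 6: dilation
        im_rec = cv2.min(im_dilated, im_reference)  # step 7: pointwise minimum with the reference
        if (im_rec == im_result).all():             # step 8: stop once nothing changes
            return im_result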
I have the following sample of handwriting taken with three different writing instruments:
Looking at the writing, I can tell that there is a distinct difference between the first two and the last one. My goal is to determine an approximation of the stroke thickness for each letter, allowing me to group them based on being thin or thick.
So far, I have tried looking into stroke width transform, but I have struggled to translate it to my example.
I am able to preprocess the image such that I am left with just the contours of the text in question. For example, here is "thick" from the last line:
I suggest detecting contours with cv::findContours, as you are doing, and then comparing the bounding rectangle area and the contour area. The thicker the writing, the greater the coefficient (contourArea/boundingRectArea) will be.
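A rough sketch of that coefficient, assuming a binary image letter_mask with the writing in white (the name is a placeholder) and OpenCV 4's two-value findContours return:

import cv2

contours, _ = cv2.findContours(letter_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    coefficient = cv2.contourArea(c) / float(w * h)  # thicker strokes -> larger ratio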
This approach will help you. It will calculate the stroke width.
import cv2
import numpy as np
from skimage.feature import peak_local_max
from skimage import img_as_float

def adaptive_thresholding(image):
    output_image = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 21, 2)
    return output_image

def stroke_width(image):
    # Distance transform of the inverted binary image; its peaks sit on the stroke centrelines
    dist = cv2.distanceTransform(cv2.subtract(255, image), cv2.DIST_L2, 5)
    im = img_as_float(dist)
    coordinates = peak_local_max(im, min_distance=15)
    pixel_strength = []
    for element in coordinates:
        x = element[0]
        y = element[1]
        pixel_strength.append(np.asarray(dist)[x, y])
    mean_pixel_strength = np.asarray(pixel_strength).mean()
    return mean_pixel_strength

image = cv2.imread('Small3.JPG', 0)
process_image = adaptive_thresholding(image)
stroke_width(process_image)
A Python implementation for this might go something like this, using the Stroke Width Transform implementation from SWTloc.
Full Disclosure: I am the author of this library.
EDIT: Post v2.0.0
Transforming The Image
import swtloc as swt
import numpy as np

imgpath = 'images/path_to_image.jpeg'
swtl = swt.SWTLocalizer(image_paths=imgpath)
swtImgObj = swtl.swtimages[0]
# Perform SWT Transformation with numba engine
swt_mat = swtImgObj.transformImage(auto_canny_sigma=1.0, gaussian_blurr=False,
                                   minimum_stroke_width=3, maximum_stroke_width=50,
                                   maximum_angle_deviation=np.pi/3)
Localize Letters
localized_letters = swtImgObj.localizeLetters()
Plot Histogram of Each Letter's Stroke Widths
import seaborn as sns
import matplotlib.pyplot as plt
all_sws = []
for letter_label, letter in localized_letters.items():
    all_sws.append(letter.stroke_widths_mean)
sns.displot(all_sws, bins=31)
From the distribution plot, it can be inferred that there might be three font sizes of text in the image - [3, 15, 27].
I am trying to train a Custom Object Detector by using the HOG+SVM method on OpenCV.
I have managed to extract HOG features from my positive and negative samples using the below line of code:
import cv2
import numpy as np

hog = cv2.HOGDescriptor()

def poshoggify():
    for i in range(1, 20):
        image = cv2.imread("/Users/munirmalik/cvprojek/cod/pos/" + str(i) + ".jpg")
        (winW, winH) = (500, 500)
        for resized in pyramid(image, scale=1.5):
            # loop over the sliding window for each layer of the pyramid
            for (x, y, window) in sliding_window(resized, stepSize=32, windowSize=(winW, winH)):
                # if the window does not meet our desired window size, ignore it
                if window.shape[0] != winH or window.shape[1] != winW:
                    continue
        img_pos = hog.compute(image)
        np.savetxt('posdata.txt', img_pos)
    return img_pos
And the equivalent function for the negative samples.
How do I format the data in such a way that the SVM knows which is positive and which is negative?
Furthermore, how do I translate this training to the "test" of detecting the desired objects through my webcam?
How do I format the data in such a way that the SVM knows which is positive and which is negative?
You would now create another list called labels which would store the class value associated with a corresponding image. For example, if you have a training set of features that looks like this:
features = [pos_features1, pos_features2, neg_features1, neg_features2, neg_features3, neg_features4]
you would have a corresponding labels class like
labels = [1, 1, 0, 0, 0, 0]
You would then feed this to a classifier like so:
from sklearn.svm import LinearSVC

clf = LinearSVC(C=1.0, class_weight='balanced')
clf.fit(features, labels)
Furthermore, how do I translate this training to the "test" of detecting the desired objects through my webcam?
Before training, you should have split your labelled dataset (ground truth) into training and testing datasets. You can do this using scikit-learn's KFold module, as sketched below.
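A minimal sketch of that split, reusing the features and labels names from the example above (the fold count and the scoring call are arbitrary choices):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import LinearSVC

features = np.asarray(features)
labels = np.asarray(labels)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(features):
    clf = LinearSVC(C=1.0, class_weight='balanced')
    clf.fit(features[train_idx], labels[train_idx])
    print(clf.score(features[test_idx], labels[test_idx]))  # accuracy on the held-out fold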
I have a picture of a laser line and I would like to extract that line from the image.
As the laser line is red, I take the red channel of the image and then search for the highest intensity in every row:
The problem now is that there are also some points which don't belong to the laser line (if you zoom into the second picture, you can see these points).
Does anyone have an idea for the next steps (to remove the single points and also to extract the lines)?
That was another approach to detect the line:
First I blurred out that "black-white" line with a kernel, then I thinned (skeletonized) that blurred line to a thin line, then I applied an OpenCV function to detect the line. The result is in the image below:
NEW:
Now I have another, harder situation.
I have to extract a green laser light.
The problem here is that the colour range of the laser line is wider and changing.
On some parts of the laser line the pixels just have a high green component, while on other parts the pixels have a high blue component as well.
Getting the highest value in every row will always output a value, instead of ignoring rows where the value isn't high enough. Consider using a threshold too, so that you can discard maxima that aren't high enough.
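For completeness, a sketch of that per-row maximum with a threshold, assuming red_channel is the 2D red-channel array (the 200 cutoff is arbitrary):

import numpy as np

rows = np.arange(red_channel.shape[0])
cols = red_channel.argmax(axis=1)                # brightest column in every row
keep = red_channel[rows, cols] > 200             # drop rows whose maximum is too dim
line_points = list(zip(cols[keep], rows[keep]))  # remaining (x, y) points of the line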
However, that's not a very efficient way to do this at all. A much better and easier solution would be to use the OpenCV function inRange(); define a lower and upper bound for the red color in all three channels, and this will return a binary image with white pixels where the image intensity is within that BGR range.
This is in Python, but it does the job; it should be easy to see how to use the function:
import cv2
import numpy as np
img = cv2.imread('image.png')
lowerb = np.array([0, 0, 120])
upperb = np.array([100, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
cv2.imshow('red', red_line)
cv2.waitKey(0)
This produces the output:
This could be further processed by finding contours or other methods to turn the points into a nice curve.
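One simple follow-up, sketched here under the assumption that the line runs roughly horizontally: take the vertical centre of the white pixels in every column of the red_line mask, giving one point per column.

import numpy as np

ys, xs = np.where(red_line > 0)
curve_points = [(x, int(ys[xs == x].mean())) for x in np.unique(xs)]  # one (x, y) per column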
I'm really sorry for the short answer without any code, but I suggest you take the contours and process them.
I don't know exactly what you need, so here are two approaches for you:
1. Just collect as many contours as possible on a single line (use their centers and try to find the straight line with the smallest mean error).
2. As in the first approach, but additionally try to heuristically combine separated lines. It's much harder, but this may give you almost the full laser line from the image.
--
Here is an example for your picture:
import cv2
import numpy as np
import math
img = cv2.imread('image.png')
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
# filtering red area of hue
redHueArea = 15
redRange = ((hsv[:, :, 0] + 360 + redHueArea) % 360)
hsv[np.where((2 * redHueArea) > redRange)] = [0, 0, 0]
# filtering by saturation
hsv[np.where(hsv[:, :, 1] < 95)] = [0, 0, 0]
# convert to rgb
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
# select only red grayscaled channel with low threshold
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
gray = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)[1]
# contours processing
(_, contours, _) = cv2.findContours(gray.copy(), cv2.RETR_LIST, 1)
for c in contours:
    area = cv2.contourArea(c)
    if area < 8:
        continue
    epsilon = 0.1 * cv2.arcLength(c, True)  # tricky smoothing to a single line
    approx = cv2.approxPolyDP(c, epsilon, True)
    cv2.drawContours(img, [approx], -1, [255, 255, 255], -1)
cv2.imshow('result', img)
cv2.waitKey(0)
In your case it works perfectly, but, as I already said, you will need to do much more work with the contours.
I have constructed a regression-type neural net (NN) with dropout in TensorFlow. I would like to know if it is possible to find out which hidden units are dropped from the previous layer in the output file. Then we could implement the NN results in C++ or MATLAB.
The following is an example of a TensorFlow model. There are three hidden layers with one output layer. After the 3rd sigmoid layer, there is a dropout with probability equal to 0.9. I would like to know if it is possible to know which hidden units in the 3rd sigmoid layer are dropped.
import tensorflow as tf

def multilayer_perceptron(_x, _weights, _biases):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(_x, _weights['h1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    layer_3 = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, _weights['h3']), _biases['b3']))
    layer_d = tf.nn.dropout(layer_3, 0.9)
    return tf.matmul(layer_d, _weights['out']) + _biases['out']
Thank you very much!
There is a way to get the mask of 0s and 1s, with shape layer_3.get_shape(), produced by tf.nn.dropout().
The trick is to give a name to your dropout operation:
layer_d = tf.nn.dropout(layer_3, 0.9, name='my_dropout')
Then you can get the wanted mask through the TensorFlow graph:
graph = tf.get_default_graph()
mask = graph.get_tensor_by_name('my_dropout/Floor:0')
The tensor mask will be of the same shape and type as layer_d, and will only have values 0 or 1; 0 corresponds to the dropped neurons.
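A hypothetical usage sketch, assuming TF1 graph mode with a session, an output tensor built by multilayer_perceptron, and a placeholder _x fed with some batch (output, _x, and batch are assumed names, not from the original code):

import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fetch the mask in the same run as the output, so both see the same random draw.
    out_val, mask_val = sess.run([output, mask], feed_dict={_x: batch})
    dropped_units = np.where(mask_val[0] == 0)[0]  # indices dropped for the first sample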
Simple and idiomatic solution (although possibly slightly slower than Oliver's):
# generate mask
mask = tf.nn.dropout(tf.ones_like(layer), rate)
# apply mask
dropped_layer = layer * mask
I have stitched two images together using OpenCV functions and C++. Now I am facing a problem: the final image contains a large black region.
The final image should be a rectangle containing the effective part.
My image is the following:
How can I remove the black section?
mevatron's answer is one way, where the amount of black region is minimised while retaining the full image.
Another option is removing the complete black region, where you also lose some part of the image, but the result will be a neat-looking rectangular image. Below is the Python code.
Here, you find three main corners of the image as below:
I have marked those values. (1,x2), (x1,1), (x3,y3). It is based on the assumption that your image starts from (1,1).
Code:
The first steps are the same as mevatron's: blur the image to remove noise, threshold the image, then find contours.
import cv2
import numpy as np
img = cv2.imread('office.jpg')
img = cv2.resize(img,(800,400))
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray,3)
ret,thresh = cv2.threshold(gray,1,255,0)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
Now find the biggest contour, which is your image. This is to avoid noise in case there is any (most probably there won't be). Or you can use mevatron's method.
max_area = -1
best_cnt = None
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt
Now approximate the contour to remove unnecessary points from the contour found, while preserving all corner values.
approx = cv2.approxPolyDP(best_cnt,0.01*cv2.arcLength(best_cnt,True),True)
Now we find the corners.
First, we find (x3,y3). It is the farthest point, so x3*y3 will be very large. So we find the products of all pairs of points and select the pair with the maximum product.
far = approx[np.product(approx,2).argmax()][0]
Next, (1,x2). It is the point where the first element is one and the second element is maximum.
ymax = approx[approx[:,:,0]==1].max()
Next, (x1,1). It is the point where the second element is one and the first element is maximum.
xmax = approx[approx[:,:,1]==1].max()
Now we find the minimum values in (far.x, xmax) and (far.y, ymax):
x = min(far[0],xmax)
y = min(far[1],ymax)
If you draw a rectangle with (1,1) and (x,y), you get result as below:
So you crop the image to the correct rectangular area.
img2 = img[:y,:x].copy()
Below is the result:
See, the problem is that you lose some parts of the stitched image.
You can do this with threshold, findContours, and boundingRect.
So, here is a quick script doing this with the Python interface.
import cv2

stitched = cv2.imread('stitched.jpg', 0)
(_, mask) = cv2.threshold(stitched, 1.0, 255.0, cv2.THRESH_BINARY)
# findContours destroys input
temp = mask.copy()
(contours, _) = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort contours by largest first (if there are more than one)
contours = sorted(contours, key=lambda contour: len(contour), reverse=True)
# boundingRect returns (x, y, width, height)
roi = cv2.boundingRect(contours[0])
# use the roi to select into the original 'stitched' image
stitched[roi[1]:roi[1] + roi[3], roi[0]:roi[0] + roi[2]]
Ends up looking like this:
NOTE: Sorting may not be necessary with raw imagery, but using the compressed image caused some compression artifacts to show up when using a low threshold, so that is why I post-processed with sorting.
Hope that helps!
You can use active contours (balloons/snakes) for selecting the black region accurately. A demonstration can be found here. Active contours are available in OpenCV, check cvSnakeImage.
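Note that cvSnakeImage belongs to the legacy C API and may not be present in recent OpenCV releases; if it is unavailable, scikit-image offers an active-contour implementation as an alternative. A rough sketch, where the grayscale image, the initial circle (cx, cy, r), and the weights are all assumptions you would need to tune:

import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

smoothed = gaussian(gray_stitched, 3)  # gray_stitched: float grayscale image, assumed given
theta = np.linspace(0, 2 * np.pi, 400)
init = np.column_stack([cy + r * np.sin(theta), cx + r * np.cos(theta)])  # initial circle, (row, col) order in recent scikit-image
snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)  # evolved contour points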