Tiling a small image into a bigger one with OpenCV - c++

Let's say I have an image of 200x200 pixels. I'd like an 800x800 pixel version where I basically duplicate the 200x200 image to fill the 800x800 one (tile the smaller image into the bigger one).
How would you do that in OpenCV? It seems straightforward, but I don't know how to either create another cv::Mat with the same type as the pattern but with a bigger (canvas) size, or whether it's possible to take the original 200x200 pixel image, increase its rows and cols, and then simply use a loop to paste the corner onto the rest of the image.
I'm using OpenCV 2.3, btw. I've done quite some processing on images with fixed dimensions, but I'm kind of clueless when it comes to increasing the dimensions of the matrix.

You can use a repeat/tile function; in Python, np.tile works well:
import numpy as np

def tile_image(tile, height, width):
    # shape[0] is rows (height), shape[1] is cols (width); assumes a 3-channel (H, W, C) image
    x_count = int(width / tile.shape[1]) + 1
    y_count = int(height / tile.shape[0]) + 1
    tiled = np.tile(tile, (y_count, x_count, 1))
    return tiled[0:height, 0:width]
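Note that OpenCV also ships a repeat of its own: cv::repeat(src, ny, nx, dst) in C++, exposed in Python as cv2.repeat. A minimal sketch (the file name is a placeholder):
import cv2

tile = cv2.imread('tile.png')     # hypothetical 200x200 pattern
tiled = cv2.repeat(tile, 4, 4)    # 4 repetitions vertically and horizontally -> 800x800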

FYI: the basic idea in the blog from @karlphillip's response is to use cvSetImageROI and cvResetImageROI. Both are C APIs.
In later versions, v2.4 and 3.x for example, one can define a Rect with the desired location and dimensions and refer to the desired section as img(rect).
Example in C API (functional style):
cvSetImageROI(new_img, cvRect(tlx, tly, width, height));
cvCopy(old_img, new_img);
cvResetImageROI(new_img);
In the C++ API using classes:
Mat roi(new_img, Rect(tlx, tly, width, height));
old_img.copyTo(roi); // plain `roi = old_img` would only rebind the header, not copy pixels into new_img
Another way in C++ (copies over the image):
old_img.copyTo(new_img(Rect(tlx, tly, width, height)));
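For the tiling question itself, the same ROI copy in Python is just NumPy slice assignment; a sketch, assuming `pattern` is the 200x200 image (the file name is a placeholder):
import cv2
import numpy as np

pattern = cv2.imread('pattern.png')                    # hypothetical 200x200 image
h, w = pattern.shape[:2]
canvas = np.zeros((800, 800, 3), dtype=pattern.dtype)  # same type, bigger canvas
for y in range(0, 800, h):
    for x in range(0, 800, w):
        canvas[y:y+h, x:x+w] = pattern                 # paste the tile into each ROI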

Related

OpenCV - Removal of noise in image

I have an image here with a table. In the column on the right, the background is filled with noise.
How do I detect the areas with noise? I only want to apply some kind of filter to the parts with noise, because I need to do OCR on it and any kind of filter will reduce the overall recognition rate.
And what kind of filter is best to remove the background noise in the image?
As said, I need to do OCR on the image.
I tried some filters/operations in OpenCV and it seems to work pretty well.
Step 1: Dilate the image -
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)
dilated = cv2.dilate(img, kernel, iterations=1)  # img: the grayscale input
As you see, the noise is gone but the characters are very light, so I eroded the image.
Step 2: Erode the image -
kernel = np.ones((5, 5), np.uint8)
eroded = cv2.erode(dilated, kernel, iterations=1)
As you can see, the noise is gone, but some characters in the other columns are broken. I would recommend running these operations on the noisy column only. You might want to use HoughLines to find the last column, then extract that column only, run dilation + erosion, and replace it with the corresponding column in the original image.
Additionally, dilation followed by erosion is actually a single operation called closing, which you can call directly:
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
As @Ermlg suggested, medianBlur with a kernel of 3 also works wonderfully.
median = cv2.medianBlur(img, 3)
Alternative Step
As you can see, all these filters work, but it is better if you apply them only to the part where the noise is. To do that, use the following:
edges = cv2.Canny(img, 50, 150, apertureSize=3)  # img is gray here
# the last two arguments are the minimum line length and the max gap between two lines
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=1000, maxLineGap=50)
for line in lines:
    for x1, y1, x2, y2 in line:
        print(x1, y1)
This gives the start coordinates of all the lines. You should take the x value which lies between (0.75 * w, w), where w is the width of the entire image. This gives you essentially **(x1, y1) = (1896, 766)**.
Then, you can extract this part only, like this:
extract = img[y1:h, x1:w]  # w, h are the width and height of the image
Then, implement the filter (median or closing) on this image. After removing the noise, you need to put this filtered image back in place of the noisy part in the original image:
img[y1:h, x1:w] = median
This is straightforward in C++:
extract.copyTo(img(Rect(x1, y1, w - x1, h - y1)));
Final Result with alternate method
Hope it helps!
My solution is based on thresholding to get the resulting image in 4 steps.
1. Read the image with OpenCV 3.2.0.
2. Apply GaussianBlur() to smooth the image, especially the region in gray color.
3. Mask the image to change the text to white and the rest to black.
4. Invert the masked image to get black text on white.
The code is in Python 2.7. It can be changed to C++ easily.
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# read Danish doc image
img = cv2.imread('./imagesStackoverflow/danish_invoice.png')
# apply GaussianBlur to smooth image
blur = cv2.GaussianBlur(img, (5, 3), 1)
# threshold the gray region to white (255, 255, 255) and set the rest to black (0, 0, 0)
mask = cv2.inRange(blur, (0, 0, 0), (150, 150, 150))
# invert the image to get black-on-white text
res = 255 - mask
plt.figure(1)
plt.subplot(121), plt.imshow(img[:, :, ::-1]), plt.title('original')
plt.subplot(122), plt.imshow(blur[:, :, ::-1]), plt.title('blurred')
plt.figure(2)
plt.subplot(121), plt.imshow(mask, cmap='gray'), plt.title('masked')
plt.subplot(122), plt.imshow(res, cmap='gray'), plt.title('result')
plt.show()
The following are the images plotted by the code, for reference.
Here is the result image at 2197 x 3218 pixels.
As far as I know, the median filter is the best solution to reduce noise. I would recommend using a median filter with a 3x3 window; see the function cv::medianBlur().
But be careful when using any noise filtering simultaneously with OCR: it can decrease recognition accuracy.
I would also recommend trying the pair of functions cv::erode() and cv::dilate(), but I'm not sure that it will be a better solution than cv::medianBlur() with a 3x3 window.
I would go with median blur (probably a 5x5 kernel).
If you are planning to apply OCR to the image, I would advise you to do the following:
Filter the image using Median Filter.
Find contours in the filtered image, you will get only text contours (Call them F).
Find contours in the original image (Call them O).
Isolate all contours in O that intersect any contour in F.
Faster solution (sketched after the steps below):
Find contours in the original image.
Filter them based on size.
Blur (3x3 box)
Threshold at 127
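A rough sketch of this faster pipeline, assuming a grayscale input; the file name and the area cutoff are placeholders to tune:
import cv2
import numpy as np

img = cv2.imread('table.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
# invert-threshold so the dark text becomes white foreground
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
# note: OpenCV 3.x returns (image, contours, hierarchy) here instead
contours, _ = cv2.findContours(binary.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

mask = np.zeros_like(binary)
for c in contours:
    if cv2.contourArea(c) > 10:                       # size filter; tune the cutoff
        cv2.drawContours(mask, [c], -1, 255, -1)      # keep the large components

filtered = cv2.blur(cv2.bitwise_and(binary, mask), (3, 3))   # 3x3 box blur
_, filtered = cv2.threshold(filtered, 127, 255, cv2.THRESH_BINARY)
result = 255 - filtered                               # back to dark text on white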
Result:
If you are very worried about removing pixels that could hurt your OCR detection, i.e. you want to stay as true to the original as possible without adding artifacts, then you should create a blob filter and delete any blobs that are smaller than n pixels or so.
I'm not going to write code, but I know this works great as I use it myself, though I don't use OpenCV (I wrote my own multithreaded blob filter for speed reasons), and sorry, I cannot share my code here. I'm just describing how to do it.
If processing time is not an issue, a very effective method in this case would be to compute all black connected components, and remove those smaller than a few pixels. It would remove all the noisy dots (apart those touching a valid component), but preserve all characters and the document structure (lines and so on).
The function to use would be connectedComponentsWithStats (you probably need to produce the negative image first; the threshold function with THRESH_BINARY_INV would work in this case), drawing white rectangles where small connected components were found.
In fact, this method could be used to find characters, defined as connected components of a given minimum and maximum size, and with aspect ratio in a given range.
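A sketch of that cleanup, assuming OpenCV 3.0+ for connectedComponentsWithStats and treating the minimum area as a parameter to tune (the file name is a placeholder):
import cv2
import numpy as np

img = cv2.imread('doc.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
# negative image: dark text and noise become white components on black
_, neg = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

n, labels, stats, _ = cv2.connectedComponentsWithStats(neg, connectivity=8)
min_area = 10                                      # components smaller than this count as noise
for i in range(1, n):                              # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < min_area:
        neg[labels == i] = 0                       # erase the small component

result = 255 - neg                                 # back to dark text on white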
I had already faced the same issue and found the best solution:
Convert the source image to grayscale, apply the fastNlMeansDenoising function, and then apply thresholding.
Like this -
fastNlMeansDenoising(gray, dst, 3.0, 7, 21); // h, templateWindowSize, searchWindowSize
threshold(dst, finaldst, 150, 255, THRESH_BINARY);
You can also adjust the threshold according to your background noise, e.g.:
threshold(dst, finaldst, 200, 255, THRESH_BINARY);
NOTE - If your column lines get removed, you can take a mask of the column lines from the source image and apply it to the denoised result using bitwise operations like AND, OR, XOR.
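A rough sketch of that masking idea in Python (the line-mask extraction itself is left as a stub; variable and file names are illustrative):
import cv2
import numpy as np

src = cv2.imread('doc.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
denoised = cv2.fastNlMeansDenoising(src, None, 3.0, 7, 21)
_, denoised = cv2.threshold(denoised, 150, 255, cv2.THRESH_BINARY)

# hypothetical: a mask that is 255 on the column lines and 0 elsewhere,
# e.g. extracted from src with HoughLinesP or a long, thin morphological kernel
line_mask = np.zeros_like(denoised)

# the lines are dark on a white background, so AND with the inverted mask
# paints the masked pixels black again
restored = cv2.bitwise_and(denoised, cv2.bitwise_not(line_mask))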
Try thresholding the image like this. Make sure your src is in grayscale. This method will only retain the pixels that lie above the threshold (note that with CV_THRESH_OTSU the value 150 is ignored; Otsu's method computes the threshold from the image):
threshold(src, output, 150, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
You might want to invert the image first, since you are trying to negate the gray pixels; after the operation, invert it again to get your desired result.
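For instance, a minimal sketch of that invert, threshold, invert-again sequence in Python (the file name and threshold value are placeholders):
import cv2

src = cv2.imread('doc.png', cv2.IMREAD_GRAYSCALE)  # hypothetical grayscale input
inv = 255 - src                                    # invert so the gray pixels become dark
_, out = cv2.threshold(inv, 150, 255, cv2.THRESH_BINARY)
result = 255 - out                                 # invert again for the final result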

Image resize vs Image pyramid for generating ROI for pedestrian detection - OpenCV

For an OpenCV driving-assistance application, I want to generate ROIs as candidates for faster HoG classification of pedestrians. I'm running this on a GPU. I do not want to use the detectMultiScale function, as it scans all over the image (including the sky). Since the features are not scalable, which of the following should I use for resizing the images when generating the ROIs?
gpu::resize(const GpuMat& src, GpuMat& dst, Size dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR, Stream& stream=Stream::Null()), or
image pyramids, cv2.pyrUp() / cv2.pyrDown()?
I couldn't find image pyramids in the OpenCV GPU library (2.4.9).
Can anyone please suggest?
Thanks
First, you could directly set an ROI by creating a Rect (an OpenCV rectangle) and using it to take the ROI image/submatrix, like this:
Mat image = imread("");
Rect region_of_interest = Rect(x, y, w, h);
Mat image_roi = image(region_of_interest);
But if you want to generate smaller downsampled images (fewer rows and columns), there are some differences between pyramids and resize:
- Pyramids are a kind of filter: the whole image is convolved with a Gaussian kernel, and then downsampled by rejecting even rows and columns.
- The resize function does a geometric transformation, and you can choose the method used to interpolate the pixel values.
In practice: pyramids are a quick way to downsample to quarter-area sub-images; resize is more generic and can be used for upsampling too.
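A quick sketch contrasting the two on the CPU (the GPU variants behave the same way; the file name is a placeholder):
import cv2

img = cv2.imread('frame.png')   # hypothetical input frame

# pyrDown: Gaussian-blur the image, then drop even rows/columns (always halves each side)
half = cv2.pyrDown(img)

# resize: a pure geometric transform with a selectable interpolation method;
# it also handles arbitrary factors and upsampling, which pyrDown cannot
scaled = cv2.resize(img, None, fx=0.75, fy=0.75, interpolation=cv2.INTER_AREA)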

Opencv Resize a stitch

I'm doing a mosaic from a video in OpenCV. I'm using this example for stitching the frames of the videos: http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html. At the end, I'm doing this to merge the new frame with the stitch created in the previous iteration:
Mat H = findHomography(obj, scene, CV_RANSAC);
static Mat rImg;
warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);//(vImg[0], rImg, H, Size(vImg[0].cols * 2, vImg[0].rows * 2), CV_INTER_LINEAR);
static Mat final_img(Size(rImg.cols*2, rImg.rows*2), CV_8UC3);
static Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);
imwrite("stitch.jpg", final_img);
vImg[0] = final_img;
So here's my problem: obviously the stitch becomes larger at each iteration, so how can I resize it to make it fit in the final_img image?
EDIT
Sorry, but I had to remove the images.
For the second question, what you observe is an error in the homography that was estimated. This may come either from:
drift (if you chain homographies along the sequence), i.e., small errors that accumulate and become large after dozens of frames;
or (more likely) because your reference image is too old with respect to your new image, so they exhibit too few matching points to give an accurate homography, yet enough to find one that passes the quality test inside cv::findHomography().
For your first question, you need to add some code that keeps track of the current bounds of the stitched image in a fixed coordinate frame.
I would suggest choosing the coordinate frame linked to the first image.
Then, when you stitch a new image, what you really do is project this image onto that coordinate frame.
You can first compute, for example, the projected coordinates of the 4 corners of the incoming frame, test whether they fit into the current stitching result, copy it to a new (bigger) image if necessary, and then proceed with stitching the new image.
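A sketch of that corner test with cv2.perspectiveTransform, assuming `new_frame` and the homography `H` (from findHomography) already exist:
import cv2
import numpy as np

# new_frame and H are assumed from the stitching loop above
h, w = new_frame.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
projected = cv2.perspectiveTransform(corners, H).reshape(-1, 2)

x_min, y_min = projected.min(axis=0)
x_max, y_max = projected.max(axis=0)
# if these bounds fall outside the current canvas, allocate a bigger image,
# copy the existing stitch into it, then warp the new frame onto it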

How to thin an image's borders by a specific pixel size? OpenCV

I'm trying to thin an image by making the border pixels of size 16x24 become 0. I'm not trying to get the skeletal image; I'm just trying to reduce the size of the white area. Any methods that I could use? Enlighten me, please.
This is the sample image that I'm trying to thin. It is made of 16x24 white blocks.
EDIT
I tried to use this
cv::Mat img=cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE);//image is in binary
cv::Mat mask = img > 0;
Mat kernel = Mat::ones( 16, 24, CV_8U );
erode(mask,mask,kernel);
But the result i got was this
which is not exactly what I wanted. I want to maintain the exact same shape with just 16x24 pixels of white shaved off the border. Any idea what went wrong?
You want to Erode your image.
Late answer, but you should erode your image using a kernel which is twice the size you want to get rid of, plus one, like:
Mat kernel = Mat::ones( 24*2+1, 16*2+1, CV_8U );
Notice I changed the places of the height and width of the block (Mat::ones takes rows first); I only know OpenCV from Python, but I am pretty sure the order is the same as in Python.
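The same in Python, where np.ones takes (rows, cols), i.e. height first (a sketch, using the binary mask from the question):
import cv2
import numpy as np

mask = cv2.imread('image.bmp', cv2.IMREAD_GRAYSCALE)  # binary image, as in the question
# to shave 16 px horizontally and 24 px vertically, the kernel must be
# twice that plus one: 49 rows (height) by 33 columns (width)
kernel = np.ones((24 * 2 + 1, 16 * 2 + 1), np.uint8)
thinned = cv2.erode(mask, kernel)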

Automatically detect and crop ROI in OpenCV

I have these images to compare with each other. However, there is too much black that I think I can crop out to make the comparison more effective.
What I want to do is crop Mars, rectangular or round, whichever may yield better results when compared. I was worried that if the cropping resulted in images of different sizes, the comparison wouldn't work out as well as expected. Ideas on how to do it, and sample code if possible? Thanks in advance.
UPDATE: Tried using cvHoughCircles(), but it won't detect the planet :/
Try to use color detection. You need to find all the colors except black. Here and here are nice explanations of this method.
You can convert these images to grayscale using cvCvtColor(img, imgGrayScale, CV_BGR2GRAY).
Then threshold them using cvThreshold(imgGrayScale, imgThresh, x, 255, CV_THRESH_BINARY). Here, you have to find a good value for x (I think x = 50 is OK).
CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(imgThresh, moments, 1);                     // moments of the binary image
double moment10 = cvGetSpatialMoment(moments, 1, 0);
double moment01 = cvGetSpatialMoment(moments, 0, 1);
double area = cvGetCentralMoment(moments, 0, 0);      // m00 = blob area
int x = moment10 / area;                              // centroid x
int y = moment01 / area;                              // centroid y
Now you know the (x, y) coordinate of the blob. Then you can crop the image using cvSetImageROI(imgThresh, cvRect(x - 10, y - 10, 20, 20)); note that cvRect takes (x, y, width, height), not two corner points. Here I have assumed that the radius of this blob is less than 10 pixels.
All cropped images are of same size and the white blob (planet) is exactly at the middle of the image.
Then you can compare images using normalized cross-correlation.
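For that last step, a sketch of the normalized cross-correlation with cv2.matchTemplate (file names are placeholders; the two crops must be the same size):
import cv2

crop_a = cv2.imread('mars_a.png', cv2.IMREAD_GRAYSCALE)  # hypothetical cropped images
crop_b = cv2.imread('mars_b.png', cv2.IMREAD_GRAYSCALE)

# with image and template the same size, the result is a single correlation score
score = cv2.matchTemplate(crop_a, crop_b, cv2.TM_CCORR_NORMED)[0, 0]
print('similarity:', score)  # 1.0 means a perfect match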
There's no fundamental reason why a histogram would fail here. I would convert the image to greyscale before doing a histogram, just to make the numbers more manageable. A color image has a 3D histogram; the Red, Green, Blue and Greyscale histograms are all 1D projections of that 3D histogram.