I am looking for an idiomatic and efficient solution for this problem:
Let's say I have a 3D tensor in which I want to represent an image with 100*100 pixels on 3 color channels,
Eigen::Tensor<int, 3> input(3,100,100);
The output I would like to get could be stored in
Eigen::Tensor<int, 4> output(3,3,100,100);
I would like to project the 3D input into the 4D output so that each color channel in the original tensor gets its own individual 3D tensor in the output, where all channels of that 3D tensor contain the same values, that is
output(0,0,42,42) = output(0,1,42,42) = output(0,2,42,42)
output(0,0,12,12) = output(0,1,12,12) = output(0,2,12,12)
Illustrated in a picture:
Originally I wanted to solve it with this method:
Chip the individual color channels.
Broadcast the individual color channels into the size I need.
Reshape the broadcast result into the desired format (this is just a 3D tensor at this point).
Concatenate the individual 3D tensors into a big 4D one.
I have two problems with this approach.
Firstly, I just cannot get the reshaping right: it always gives back a reshaped tensor with the dimensionality I want, but the coefficients get shuffled. I started to experiment with the layout of the tensors, but it did not seem to help.
Secondly, this seems to be very tedious; I just feel like there should be a more convenient way to achieve this, but I could not find any clue about it in the documentation.
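For reference, this is roughly the kind of one-liner I was hoping exists (untested, and assuming the default column-major layout): reshape the input to add a size-1 dimension and then broadcast along it, so each channel gets replicated without any chipping or concatenation.

#include <unsupported/Eigen/CXX11/Tensor>
#include <cassert>

int main()
{
    Eigen::Tensor<int, 3> input(3, 100, 100);
    input.setRandom();

    // Insert a dummy dimension of size 1 after the channel dimension,
    // then broadcast (replicate) the tensor 3 times along it.
    Eigen::array<Eigen::Index, 4> new_shape{{3, 1, 100, 100}};
    Eigen::array<Eigen::Index, 4> bcast{{1, 3, 1, 1}};
    Eigen::Tensor<int, 4> output = input.reshape(new_shape).broadcast(bcast);

    // Expected: output(c, k, i, j) == input(c, i, j) for every k in {0, 1, 2}.
    assert(output(0, 0, 42, 42) == output(0, 1, 42, 42));
    assert(output(0, 1, 42, 42) == output(0, 2, 42, 42));
    return 0;
}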
I have an image here with a table. In the column on the right, the background is filled with noise.
How do I detect the areas with noise? I only want to apply some kind of filter to the parts with noise, because I need to do OCR on it and any kind of filter will reduce the overall recognition accuracy.
And what kind of filter is the best to remove the background noise in the image?
As mentioned, I need to do OCR on the image.
I tried some filters/operations in OpenCV and it seems to work pretty well.
Step 1: Dilate the image -
kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(img, kernel, iterations=1)
As you see, the noise is gone but the characters are very light, so I eroded the image.
Step 2: Erode the image -
kernel = np.ones((5, 5), np.uint8)
erosion = cv2.erode(dilation, kernel, iterations=1)
As you can see, the noise is gone however some characters on the other columns are broken. I would recommend running these operations on the noisy column only. You might want to use HoughLines to find the last column. Then you can extract that column only, run dilation + erosion and replace this with the corresponding column in the original image.
Additionally, dilation + erosion is actually an operation called closing. This you could call directly using -
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
As @Ermlg suggested, medianBlur with a kernel of 3 also works wonderfully.
median = cv2.medianBlur(img, 3)
Alternative Step
As you can see, all these filters work, but it is better if you apply them only to the part where the noise is. To do that, use the following:
edges = cv2.Canny(img, 50, 150, apertureSize=3)  # img is grayscale here
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=1000, maxLineGap=50)  # minimum line length and max gap between two lines
for line in lines:
    for x1, y1, x2, y2 in line:
        print(x1, y1)
This prints the start coordinates of all the detected lines. You should take the x value which is between 0.75 * w and w, where w is the width of the entire image. This will give you essentially (x1, y1) = (1896, 766).
Then, you can extract this part only, like:
extract = img[y1:h, x1:w]  # w, h are the width and height of the image
Then, apply the filter (median or closing) to this extracted image. After removing the noise, you need to put the filtered image back in place of the corresponding part of the original image.
img[y1:h, x1:w] = median
This is straightforward in C++:
extract.copyTo(img(cv::Rect(x1, y1, w - x1, h - y1)));
Final Result with alternate method
Hope it helps!
My solution is based on thresholding to get the resulting image in 4 steps.
Read the image with OpenCV 3.2.0.
Apply GaussianBlur() to smooth the image, especially the gray region.
Mask the image to change the text to white and the rest to black.
Invert the masked image to get black text on white.
The code is in Python 2.7. It can be changed to C++ easily.
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# read Danish doc image
img = cv2.imread('./imagesStackoverflow/danish_invoice.png')
# apply GaussianBlur to smooth image
blur = cv2.GaussianBlur(img,(5,3), 1)
# threshold: dark/gray pixels (up to 150) become white (255), the rest black (0)
mask=cv2.inRange(blur,(0,0,0),(150,150,150))
# invert the image to have text black-in-white
res = 255 - mask
plt.figure(1)
plt.subplot(121), plt.imshow(img[:,:,::-1]), plt.title('original')
plt.subplot(122), plt.imshow(blur[:,:,::-1]), plt.title('blurred')
plt.figure(2)
plt.subplot(121), plt.imshow(mask, cmap='gray'), plt.title('masked')
plt.subplot(122), plt.imshow(res, cmap='gray'), plt.title('result')
plt.show()
The following are the images plotted by the code, for reference.
Here is the result image at 2197 x 3218 pixels.
As far as I know, the median filter is the best solution to reduce noise. I would recommend using a median filter with a 3x3 window. See the function cv::medianBlur().
But be careful when using any noise filtering together with OCR. It can lead to decreased recognition accuracy.
I would also recommend trying the pair of functions cv::erode() and cv::dilate(), but I'm not sure it would be a better solution than cv::medianBlur() with a 3x3 window.
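For illustration, a minimal C++ sketch of the two options above (the input file name, the 3x3 window and the kernel size are just the values suggested here, not anything verified against this particular image):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("table.png", cv::IMREAD_GRAYSCALE);  // hypothetical input

    cv::Mat median_out;
    cv::medianBlur(gray, median_out, 3);                           // 3x3 median filter

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat morph_out;
    cv::dilate(gray, morph_out, kernel);                           // dilation removes small dark specks...
    cv::erode(morph_out, morph_out, kernel);                       // ...erosion restores the stroke width

    cv::imwrite("median.png", median_out);
    cv::imwrite("dilate_erode.png", morph_out);
    return 0;
}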
I would go with a median blur (probably a 5x5 kernel).
If you are planning to apply OCR to the image, I would advise you to do the following:
Filter the image using a median filter.
Find contours in the filtered image; you will get only text contours (call them F).
Find contours in the original image (call them O).
Isolate all contours in O that have an intersection with any contour in F.
Faster solution (a rough sketch follows below):
Find contours in the original image.
Filter them based on size.
Blur (3x3 box)
Threshold at 127
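A minimal OpenCV C++ sketch of that faster route. The file name and the minimum contour area are assumptions; tune them to your image:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("table.png", cv::IMREAD_GRAYSCALE);          // hypothetical input

    // Find contours on the inverted, binarized image (text and noise become white blobs).
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Filter by size: redraw only contours large enough to be characters or table lines.
    cv::Mat clean = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > 10.0)                            // size threshold is a guess
            cv::drawContours(clean, contours, static_cast<int>(i), cv::Scalar(255), cv::FILLED);

    // Blur (3x3 box) and threshold at 127 to smooth the remaining edges.
    cv::blur(clean, clean, cv::Size(3, 3));
    cv::threshold(clean, clean, 127, 255, cv::THRESH_BINARY);

    cv::imwrite("clean.png", 255 - clean);                                  // back to black text on white
    return 0;
}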
Result:
If you are very worried about removing pixels that could hurt your OCR detection, and you want to stay as close to the original as possible without adding artefacts, then you should create a blob filter and delete any blobs smaller than n pixels or so.
I'm not going to write code, but I know this works great as I use it myself, though I don't use OpenCV (I wrote my own multithreaded blob filter for speed reasons). Sorry, but I cannot share my code here; I'm just describing how to do it.
If processing time is not an issue, a very effective method in this case would be to compute all black connected components and remove those smaller than a few pixels. It would remove all the noisy dots (apart from those touching a valid component), but preserve all characters and the document structure (lines and so on).
The function to use would be connectedComponentsWithStats (you will probably need to produce the negative image first; the threshold function with THRESH_BINARY_INV would work in this case), drawing white rectangles where small connected components were found.
In fact, this method could be used to find characters, defined as connected components of a given minimum and maximum size, and with aspect ratio in a given range.
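A minimal sketch of this approach; the file name, the binarization threshold of 150 and the minimum area of 5 pixels are assumptions, not tuned values:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("table.png", cv::IMREAD_GRAYSCALE);       // hypothetical input

    // Negative image: black components become white so they can be labelled.
    cv::Mat bin;
    cv::threshold(gray, bin, 150, 255, cv::THRESH_BINARY_INV);

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids, 8);

    // Erase every component smaller than a few pixels by painting a white
    // rectangle over its bounding box (label 0 is the background).
    cv::Mat cleaned = gray.clone();
    for (int i = 1; i < n; ++i) {
        if (stats.at<int>(i, cv::CC_STAT_AREA) < 5) {
            cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                         stats.at<int>(i, cv::CC_STAT_TOP),
                         stats.at<int>(i, cv::CC_STAT_WIDTH),
                         stats.at<int>(i, cv::CC_STAT_HEIGHT));
            cleaned(box).setTo(255);
        }
    }
    cv::imwrite("cleaned.png", cleaned);
    return 0;
}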
I had already faced the same issue, and this gave me the best solution:
Convert the source image to a grayscale image, apply the fastNlMeansDenoising function, and then apply a threshold.
Like this -
fastNlMeansDenoising(gray, dst, 3.0, 7, 21); // h, templateWindowSize, searchWindowSize
threshold(dst,finaldst,150,255,THRESH_BINARY);
You can also adjust the threshold according to the background noise in your image,
e.g. threshold(dst, finaldst, 200, 255, THRESH_BINARY);
NOTE: If your column lines get removed, you can take a mask of the column lines from the source image and apply it to the denoised result using bitwise operations like AND, OR, XOR (a rough sketch follows below).
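A rough C++ sketch of that note. The vertical kernel length and the thresholds are assumptions; the idea is to extract long vertical runs from the source as a line mask and combine them back into the denoised result with a bitwise AND:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("table.png", cv::IMREAD_GRAYSCALE);                     // hypothetical input

    // Denoise + threshold as above.
    cv::Mat dst, finaldst;
    cv::fastNlMeansDenoising(gray, dst, 3.0f, 7, 21);
    cv::threshold(dst, finaldst, 150, 255, cv::THRESH_BINARY);

    // Mask of the column lines from the source image: binarize inverted,
    // then keep only long vertical runs with a morphological opening.
    cv::Mat srcInv, lineMask;
    cv::threshold(gray, srcInv, 150, 255, cv::THRESH_BINARY_INV);
    cv::Mat vertKernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(1, 25));  // length is a guess
    cv::morphologyEx(srcInv, lineMask, cv::MORPH_OPEN, vertKernel);

    // Paint the recovered lines (black) back into the white-background result.
    cv::Mat linesInv = 255 - lineMask;
    cv::Mat restored;
    cv::bitwise_and(finaldst, linesInv, restored);
    cv::imwrite("restored.png", restored);
    return 0;
}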
Try thresholding the image like this. Make sure your src is in grayscale. This method retains only the pixels above the threshold (nominally 150 here; with the CV_THRESH_OTSU flag the threshold value is computed automatically).
threshold(src, output, 150, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
You might want to invert the image as you are trying to negate the gray pixels. After the operation, invert it again to get your desired result.
I have this MATLAB code to display the image after computing a spectrogram (STFT, coupled PLCA, ...):
t = z2 *stft_options.hop/stft_options.sr;
f = stft_options.sr*[0:size(spec_t,1)-1]/stft_options.N/1000;
max_val = max(max(db(abs(spec_t))));
imagesc(t, f, db(abs(spec_t)),[max_val-60 max_val]);
And I get this result:
I ported this to C++ successfully using the Armadillo library and got the mat results:
mat f,t,spec_t;
The problem is that I have no idea how to convert these to a bitmap the way imagesc does in MATLAB.
I searched and found this answer, but it seems it doesn't work in my case because:
I use a double matrix instead of an integer matrix, which can't be mapped directly to bitmap colors.
The imagesc method takes 4 parameters, including the axis bounds given by the vectors x and y.
The imagesc method also supports scaling (I actually don't know how it works).
Does anyone have any suggestion?
Update: Here is the result of Armadillo's save method. It doesn't look like the spectrogram image above. Am I missing something?
spec_t.save("spec_t.png", pgm_binary);
Update 2: saving the spectrogram with db and abs applied:
mat spec_t_mag = db(abs(spec_t)); // where the db method is: m = 10 * log10(m);
spec_t_mag.save("mag_spec_t.png", pgm_binary);
And the result:
Armadillo is a linear algebra package; AFAIK it does not provide graphics routines. If you use something like OpenCV for those, then it is really simple.
See this link about opencv's imshow(), and this link on how to use it in a program.
Note that OpenCV (like most other libraries) stores images in row-major order, while Armadillo uses column-major storage with (row, column) indexing, as explained here.
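A minimal sketch of that route, assuming the matrix to display is the one you called spec_t_mag (the random matrix here is just a stand-in); the transpose handles the column-major vs row-major difference:

#include <armadillo>
#include <opencv2/opencv.hpp>

int main()
{
    arma::mat spec_t_mag = arma::randu<arma::mat>(256, 512);          // stand-in for db(abs(spec_t))

    // Armadillo is column-major, OpenCV is row-major: wrap the memory with
    // the dimensions swapped, then transpose into a proper row-major image.
    cv::Mat wrapped(static_cast<int>(spec_t_mag.n_cols),
                    static_cast<int>(spec_t_mag.n_rows),
                    CV_64F, spec_t_mag.memptr());
    cv::Mat img;
    cv::transpose(wrapped, img);

    // Scale to 0..255 and convert to 8 bit for display.
    cv::Mat img8u;
    cv::normalize(img, img8u, 0, 255, cv::NORM_MINMAX, CV_8U);

    cv::imshow("spectrogram", img8u);
    cv::waitKey(0);
    return 0;
}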
For scaling, it's safest to convert to unsigned char yourself. In Armadillo that would be something like:
arma::Mat<unsigned char> mat2 = arma::conv_to<arma::Mat<unsigned char>>::from(255*(mat - mat.min())/(mat.max() - mat.min()));
The t and f variables are for setting the axes, they are not part of the bitmap.
For just writing an image you can use Armadillo. Here is a description of how to write portable grey map (PGM) and portable pixel map (PPM) images. PGM export is only possible for 2D matrices; PPM export only for 3D matrices (cubes), where the 3rd dimension (of size 3) holds the red, green and blue channels.
The reason your MATLAB figure looks prettier is that it has a colour map: a mapping of every value 0..255 to a vector [R, G, B] specifying the relative intensities of red, green and blue. A photo has an RGB value at every point:
colormap(gray);
x=imread('onion.png');
imagesc(x);
size(x)
That's the 3rd dimension of the image.
Your matrix is a 2d image, so the most natural way to show it is as grey levels (as happened for your spectrum).
x=mean(x,3);
imagesc(x);
This means that the R, G and B intensities jointly increase with the values in mat. You can put a colour map of different R,G,B combinations in a variable and use that instead, i.e. y=colormap('hot');colormap(y);. The variable y shows the R,G,B combinations for the (rescaled) image values.
It's also possible to make your own colour map (in matlab you can specify 64 R, G, and B combinations with values between 0 and 1):
z = [63:-1:0; 1:2:63, 63:-2:0; 0:63]'/63;
colormap(z);
Now for increasing image values, red intensities decrease (starting from the maximum level), green intensities quickly increase then decrease, and blue values increase from minimum to maximum.
Because PPM appears not to support colour maps (I don't know the format well), you need to specify the R,G,B values in a 3D array. For a colour order similar to z you would need to make a Cube<unsigned char> c(ysize, xsize, 3) and then for every pixel y, x in mat2, do:
c(y,x,0) = 255 - mat2(y,x);
c(y,x,1) = 255 - abs(255 - 2*mat2(y,x));
c(y,x,2) = mat2(y,x);
or something very similar.
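Putting the pieces together, a sketch of the whole export; the matrix size here is made up and the colour ramp just mirrors the z map above:

#include <armadillo>
#include <cstdlib>

int main()
{
    arma::mat m = arma::randu<arma::mat>(256, 512);                    // stand-in for the 2D spectrogram values

    // Rescale to 0..255 as suggested earlier.
    arma::Mat<unsigned char> mat2 =
        arma::conv_to<arma::Mat<unsigned char>>::from(255.0*(m - m.min())/(m.max() - m.min()));

    // Fill a cube whose 3 slices are the R, G and B channels.
    arma::Cube<unsigned char> c(m.n_rows, m.n_cols, 3);
    for (arma::uword y = 0; y < m.n_rows; ++y) {
        for (arma::uword x = 0; x < m.n_cols; ++x) {
            int v = mat2(y, x);
            c(y, x, 0) = static_cast<unsigned char>(255 - v);                    // red falls with the value
            c(y, x, 1) = static_cast<unsigned char>(255 - std::abs(255 - 2*v));  // green peaks mid-range
            c(y, x, 2) = static_cast<unsigned char>(v);                          // blue rises with the value
        }
    }
    c.save("spec_colour.ppm", arma::ppm_binary);
    return 0;
}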
You may use SigPack, a signal processing library on top of Armadillo. It has spectrogram support and you may save the plot to a lot of different formats (png, ps, eps, tex, pdf, svg, emf, gif). SigPack uses Gnuplot for the plotting.
Could anyone please help me resolve my issue? I am working on an image-processing project and I am stuck at one point. I got this image after some processing, and for further processing I need to crop or detect only the deer and remove the other portions of the image.
This is my Initial image:
And my result should be something like this:
It would be even better if I could get only the single biggest blob in the image and save it as an image.
It looks like the deer in your image is pretty much connected and closed. What we can do is use regionprops to find all of the bounding boxes in your image. Once we do this, we can find the bounding box that gives the largest area, which will presumably be your deer. Once we find this bounding box, we can crop your image and focus on the deer entirely. As such, assuming your image is stored in im, do this:
im = im2bw(im); %// Just in case...
bound = regionprops(im, 'BoundingBox', 'Area');
%// Obtaining Bounding Box co-ordinates
bboxes = reshape([bound.BoundingBox], 4, []).';
%// Obtain the areas within each bounding box
areas = [bound.Area].';
%// Figure out which bounding box has the maximum area
[~,maxInd] = max(areas);
%// Obtain this bounding box
%// Ensure all floating point is removed
finalBB = floor(bboxes(maxInd,:));
%// Crop the image
out = im(finalBB(2):finalBB(2)+finalBB(4), finalBB(1):finalBB(1)+finalBB(3));
%// Show the images
figure;
subplot(1,2,1);
imshow(im);
subplot(1,2,2);
imshow(out);
Let's go through this code slowly. We first convert the image to binary just in case. Your image may be an RGB image with intensities of 0 or 255... I can't say for sure, so let's just do a binary conversion just in case. We then call regionprops with the BoundingBox property to find every bounding box of every unique object in the image. This bounding box is the minimum spanning bounding box to ensure that the object is contained within it. Each bounding box is a 4 element array that is structured like so:
[x y w h]
Each bounding box is delineated by its origin at the top left corner of the box, denoted as x and y, where x is the horizontal co-ordinate while y is the vertical co-ordinate. x increases positively from left to right, while y increases positively from top to bottom. w,h are the width and height of the bounding box. Because these points are in a structure, I extract them and place them into a single 1D vector, then reshape it so that it becomes an M x 4 matrix. Bear in mind that this is the only way that I know of to extract values in arrays for each structure element efficiently without any for loops. This will make our searching quicker. I have also done the same for the Area property. For each bounding box we have in our image, we also have the attribute of the total area encapsulated within the bounding box.
Thanks to @Shai for the spot, we can't simply use the bounding box co-ordinates to determine whether or not something has the biggest area within it as we could have a thin diagonal line that could drive the bounding box co-ordinates to be higher. As such, we also need to rely on the total area that the object takes up within the bounding box as well. Simply put, it's just the sum of all of the pixels that are contained within the object.
Therefore, we search the entire area vector that we have created to see which has the maximum area. This corresponds to your deer. Once we find this location, extract the bounding box locations, then use this to crop the image. Bear in mind that the bounding box values may have floating point numbers. As the image co-ordinates are in integer based, we need to remove these floating point values before we decide to crop. I decided to use floor. I then write code that displays the original image, with the cropped result.
Bear in mind that this will only work if there is just one object in the image. If you want to find multiple objects, check bwboundaries in MATLAB. Otherwise, I believe this should get you started.
Just for completeness, we get the following result:
While object detection is a very general CV task, you can start with something simple if the assumptions are strong enough and you can guarantee that the input images will contain a single prominent white blob well described by a bounding box.
One very simple idea is to subdivide the picture into 3x3 = 9 patches, calculate statistics for each patch and compute some objective function. In the simplest case you just do a grid search over various partitions and select the one with the highest objective metric. Here's an illustration:
If every line is a parameter (x_1, x_2, y_1, y_2), then you want to optimize the objective F(x_1, x_2, y_1, y_2), either by:
grid search (try all x_i, y_i in some quantization steps)
genetic-algorithm-like random search
gradient descent (move every parameter in that direction that optimizes the target function)
The target function F can be defined over statistics of the patches, e.g. like this:
F(9 patches) {
brightest_patch = max(patches)
others = patches \ brightest_patch
score = brightness(brightest_patch) - 1/8 * brightness(others)
return score
}
or anything else that incorporates relevant statistics of the patches as well as their sizes. This also allows you to incorporate prior knowledge: if you expect the blob to appear in the middle of the image, then you can define a regularization term that penalizes F if the parameters x_i and y_i deviate too much from the expected position.
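To make that concrete, here is a toy C++ sketch of the grid-search variant. It assumes a single bright blob on a darker background and, instead of the full 9-patch score, uses a simpler objective (mean brightness inside the box minus mean brightness outside); the file name and step size are made up:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <limits>

// Score a candidate box: mean brightness inside minus mean brightness outside,
// computed in O(1) per box from the integral image.
static double scoreBox(const cv::Mat& integ, int x1, int y1, int x2, int y2,
                       double totalSum, double totalArea)
{
    double inside = integ.at<double>(y2, x2) - integ.at<double>(y1, x2)
                  - integ.at<double>(y2, x1) + integ.at<double>(y1, x1);
    double areaIn  = static_cast<double>((x2 - x1) * (y2 - y1));
    double areaOut = std::max(totalArea - areaIn, 1.0);
    return inside / areaIn - (totalSum - inside) / areaOut;
}

int main()
{
    cv::Mat img = cv::imread("blob.png", cv::IMREAD_GRAYSCALE);   // hypothetical input
    cv::Mat integ;
    cv::integral(img, integ, CV_64F);

    double totalSum  = integ.at<double>(img.rows, img.cols);
    double totalArea = static_cast<double>(img.rows) * img.cols;

    int step = 16;                                                 // quantization step of the grid search
    double best = -std::numeric_limits<double>::infinity();
    cv::Rect bestBox;
    for (int y1 = 0; y1 < img.rows; y1 += step)
        for (int y2 = y1 + step; y2 <= img.rows; y2 += step)
            for (int x1 = 0; x1 < img.cols; x1 += step)
                for (int x2 = x1 + step; x2 <= img.cols; x2 += step) {
                    double s = scoreBox(integ, x1, y1, x2, y2, totalSum, totalArea);
                    if (s > best) { best = s; bestBox = cv::Rect(x1, y1, x2 - x1, y2 - y1); }
                }
    std::cout << "best box: " << bestBox.x << "," << bestBox.y << " "
              << bestBox.width << "x" << bestBox.height << std::endl;
    return 0;
}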
Thanks to all who answered and commented on my question. With your help I got to my exact solution. I am posting my final code and the result for others.
img = im2bw(imread('deer.png'));
[L, num] = bwlabel(img, 4);
%%// Get biggest blob or object
count_pixels_per_obj = sum(bsxfun(@eq, L(:), 1:num));
[~,ind] = max(count_pixels_per_obj);
biggest_blob = (L==ind);
%%// crop only deer
bound = regionprops(biggest_blob, 'BoundingBox');
%// Obtaining Bounding Box co-ordinates
bboxes = reshape([bound.BoundingBox], 4, []).';
%// Obtain this bounding box
%// Ensure all floating point is removed
finalBB = floor(bboxes);
out = biggest_blob(finalBB(2):finalBB(2)+finalBB(4),finalBB(1):finalBB(1)+finalBB(3));
%%// Show images
figure;
imshow(out);
Can anyone tell me the correct way to use the getOutputValue function in the following link? Also, how does the author get the 2nd and 3rd images from the code?
http://www.codeproject.com/Articles/385658/Multidimensional-Discrete-Wavelet-Transform
Thanks
Okay, usage:
I haven't tried it yet, but from what I can tell you simply call getOutputValue() to get one result. The parameter is a vector containing the "coordinates" (based on the number of dimensions of your input).
Images:
In this example, the author obviously used the image data as the discrete values, e.g. a black pixel would be 0 and a white pixel would be 255, with all other shades of grey in between (a default 8-bit grayscale image).
He then used the output signal/result to recreate an image (i.e. interpreted the values as pixels once again).