I have an image, and I resized it to obtain another image with a size greater than the first.
resize(roi, zoom, Size(2274, 70), 0.0, 0.0, 3);
I also tested all of the interpolation methods, but none of them gives a good result:
CV_INTER_NN
CV_INTER_LINEAR (the default)
CV_INTER_CUBIC
CV_INTER_AREA
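For reference, the literal 3 in the resize call above corresponds to CV_INTER_AREA (cv::INTER_AREA in the C++ API), which is designed for shrinking; OpenCV's documentation suggests INTER_CUBIC or INTER_LANCZOS4 for enlarging. A minimal sketch with the named constant (the file names are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat roi = cv::imread("text.png");   // hypothetical input image
    cv::Mat zoom;

    // Same call as above, but with a named constant instead of the magic
    // number 3; INTER_CUBIC generally keeps text edges sharper when
    // enlarging than INTER_AREA does.
    cv::resize(roi, zoom, cv::Size(2274, 70), 0.0, 0.0, cv::INTER_CUBIC);

    cv::imwrite("zoomed.png", zoom);
    return 0;
}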
The image contains text; when I zoom it, the text becomes so fuzzy that I cannot recognize it.
I am asking for an algorithm or method to make the image clearer.
Thanks for the help.
Is there a way to produce a glare on an image? Given an image with an object, I want to produce a glare on a portion of the image. If I have an image that is 256x256, I want to produce glare on the first 64x64 patch. Is there a function in OpenCV I can use for that? If not, what is a good way to go about this problem?
I think this example does what you need. Each time it saves a face, it flashes the part of the screen where the face was recognised, so the glare changes place and size every time.
You can find it here:
https://github.com/MasteringOpenCV/code/tree/master/Chapter8_FaceRecognition
Look for this part in main.cpp:
// Make a white flash on the face, so the user knows a photo has been taken.
Mat displayedFaceRegion = displayedFrame(faceRect);
displayedFaceRegion += CV_RGB(90,90,90);
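Applied to the question above (glare on the first 64x64 patch of a 256x256 image), a minimal sketch could look like this; the file names are placeholders:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png");   // hypothetical 256x256 image

    // Take the top-left 64x64 patch as a region of interest (ROI).
    // The ROI shares memory with img, so brightening it modifies img.
    cv::Mat patch = img(cv::Rect(0, 0, 64, 64));

    // Adding a constant saturates at 255, giving a simple flash/glare.
    patch += cv::Scalar(90, 90, 90);

    cv::imwrite("glare.png", img);
    return 0;
}

For a softer glare you could add a blurred white blob instead of a uniform constant, but the ROI-plus-addition above is what the face-recognition example does.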
Here is the link to my previous question. Following the answer there, I obtained the desired image, which is white flood-filled.
After applying the morphological erosion operation to the white flood-filled image, I get the new masked image.
Your answer helped a lot. Now I am multiplying the new masked image with the original grayscale image in order to get the vein pattern, but the result is the same image I get after performing erosion on the white flood-filled image. After completing this step I have to apply the Laplacian function to get the vein pattern. I am attaching the original image and the result image that I want. I hope you will look into the matter.
Original Image.
Result Image.
If I understand you correctly, you only want to extract the veins from the grayscale hand image, right? To do something like this, you would multiply the two, for example:
finalimg = grayimg.mul(veinmask); // element-wise; operator* on cv::Mat would be matrix multiplication
If you have done the above, I think it would be more helpful to post a portion of your code so the experts here can point out what's wrong; the output image that you're getting and the one you want would also help.
I hope I understand you correctly. You have a grayscale image showing a hand (the first image in your question).
You create a mask image that looks like the second image you posted.
And the multiplication of the two results in the mask image again?
If that is the case, check your values. If you work with a byte image, your mask must contain the values 0 and 1, not 0 and 255; otherwise the multiplication result for non-zero mask pixels exceeds 255 and saturates!
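A minimal sketch of both fixes, assuming 8-bit single-channel images (the file names are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("hand_gray.png", cv::IMREAD_GRAYSCALE);
    cv::Mat mask = cv::imread("hand_mask.png", cv::IMREAD_GRAYSCALE); // 0/255

    // Option 1: scale the mask down to 0/1, then multiply element-wise.
    cv::Mat mask01 = mask / 255;            // values become 0 or 1
    cv::Mat veins = gray.mul(mask01);

    // Option 2: keep the 0/255 mask and use it with bitwise_and,
    // which sidesteps the overflow problem entirely.
    cv::Mat veins2;
    cv::bitwise_and(gray, gray, veins2, mask);

    cv::imwrite("veins.png", veins2);
    return 0;
}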
I have a black area around my image, and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then thresholding it to binary, but that affects my image, since the result contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a solution that is more efficient in terms of runtime, but you should be able to prototype this one quite quickly.
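A rough sketch of those four steps, assuming an 8-bit grayscale input (it uses 255 instead of 1 for the white value so the mask is directly viewable; untested, so treat it as a prototype):

#include <opencv2/opencv.hpp>
#include <vector>

// Returns a mask of the black border region around the image content.
cv::Mat borderMask(const cv::Mat& gray)
{
    // 1. Inverse-binarize: pixels that are exactly 0 become white (255).
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);

    // 2. Find the white segments.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // 3. + 4. Keep only segments whose bounding box touches an image
    // border and draw them, filled, into the mask.
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Rect r = cv::boundingRect(contours[i]);
        bool touchesBorder = r.x == 0 || r.y == 0 ||
                             r.x + r.width  == gray.cols ||
                             r.y + r.height == gray.rows;
        if (touchesBorder)
            cv::drawContours(mask, contours, static_cast<int>(i),
                             cv::Scalar(255), cv::FILLED);
    }
    return mask;
}

For the crop question, one option is to invert this mask and take cv::boundingRect of its non-zero (content) pixels, then crop with image(rect).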
I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The 2 images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there are not many gradients in them, which I think is why the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
I also used the following well-known example to stitch them together:
Mat WarpedImage;
// Warp the second image into the coordinate frame of the first, using a
// canvas twice the size of img_2 to leave room for the overlap.
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2*img_2.cols, 2*img_2.rows));
// Copy the first image into the top-left corner of the warped canvas.
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));
img_1.copyTo(half);
I sort of made it fit, but my problem is that in my case the two images could be aligned either vertically or horizontally.
By default, all stitching examples on the internet assume the first image is the left image and the second image is the right image.
So my first question would be:
How can I detect whether the second image is to the left, right, above, or below the first image, and how do I create a new image of the proper size?
Secondly:
Currently I'm getting the proper image; however, because I don't have decent code to determine the ideal width and height of the new image, I end up with a lot of black/empty space in the new image.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
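A sketch of that idea, reusing the variable names img_1, img_2, and homography from the question:

#include <opencv2/opencv.hpp>
#include <vector>

// Reproject img_2's corners through the homography to find where it
// lands relative to img_1, then size the output canvas to fit both.
cv::Rect stitchedBounds(const cv::Mat& img_1, const cv::Mat& img_2,
                        const cv::Mat& homography)
{
    std::vector<cv::Point2f> pts = {
        {0.f, 0.f},
        {(float)img_2.cols, 0.f},
        {(float)img_2.cols, (float)img_2.rows},
        {0.f, (float)img_2.rows}
    };
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(pts, warped, homography);

    // Add img_1's own corners, then take the bounding box of everything.
    warped.push_back({0.f, 0.f});
    warped.push_back({(float)img_1.cols, (float)img_1.rows});
    return cv::boundingRect(warped);
}

If the resulting rectangle has a negative x or y, the second image extends to the left of or above the first; in that case, compose the homography with a translation before warping so everything lands inside the canvas. Cropping the output to this rectangle also removes the black areas.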
I want to merge 2 images. How can I remove the common area between the 2 images?
Can you tell me an algorithm to solve this problem? Thanks.
The two images are screenshots. They have the same width, and image 1 is always above image 2.
When two images have the same width and there is no X-offset on the left side, this shouldn't be too difficult.
Create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, find the CRC of the first line of the lower image in the first vector; this is the offset in the upper picture. Then check that all following CRCs from both pictures are identical. If they are not, look up the next occurrence of the initial CRC in the upper image and repeat.
Once the CRCs of both pictures match under that offset, you can use the bit-blit function of your graphics library to build the composite picture.
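A minimal sketch of that idea in OpenCV C++, using a simple FNV-1a hash in place of a real CRC (assumes both images have the same width and pixel type):

#include <opencv2/opencv.hpp>
#include <vector>

// Hash one pixel row.
static size_t rowHash(const cv::Mat& img, int row)
{
    const uchar* p = img.ptr<uchar>(row);
    size_t h = 1469598103934665603ull;              // FNV offset basis
    for (size_t i = 0; i < img.cols * img.elemSize(); ++i)
        h = (h ^ p[i]) * 1099511628211ull;          // FNV prime
    return h;
}

// Returns the row in `top` where `bottom` starts overlapping, or -1.
int findOverlap(const cv::Mat& top, const cv::Mat& bottom)
{
    std::vector<size_t> topH(top.rows), botH(bottom.rows);
    for (int r = 0; r < top.rows; ++r)    topH[r] = rowHash(top, r);
    for (int r = 0; r < bottom.rows; ++r) botH[r] = rowHash(bottom, r);

    for (int start = 0; start < top.rows; ++start)
    {
        bool match = true;
        for (int r = start; r < top.rows && r - start < bottom.rows; ++r)
            if (topH[r] != botH[r - start]) { match = false; break; }
        if (match) return start;          // first fully consistent offset
    }
    return -1;
}

Once the offset is known, cv::vconcat of top.rowRange(0, offset) and the whole bottom image gives the composite.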
I haven't come across something similar before but I think the following might work:
Convert both to grey-scale.
Enhance the contrast; the grey box might become white, for example, and the text would become more black. (This is just to increase the confidence in the next step.)
Apply some threshold, converting the pictures to black and white.
Afterwards, you can find the similar areas (and thus the offset of the overlap) with a good degree of confidence. To find the similar parts, you could use Harper's method (which is good, but I don't know how reliable it would be without the said filtering), or you could apply some DSP operation(s) such as convolution.
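A sketch of that pipeline, using cv::matchTemplate as the correlation step (the 32-row template height is an arbitrary choice):

#include <opencv2/opencv.hpp>
#include <algorithm>

// Binarize both screenshots, then slide the first rows of the lower
// image over the upper image to find the vertical overlap offset.
int findOffset(const cv::Mat& top, const cv::Mat& bottom)
{
    cv::Mat tGray, bGray, tBin, bBin;
    cv::cvtColor(top, tGray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(bottom, bGray, cv::COLOR_BGR2GRAY);
    cv::threshold(tGray, tBin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::threshold(bGray, bBin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Template: the first rows of the lower image, full width.
    cv::Mat tmpl = bBin.rowRange(0, std::min(32, bBin.rows));
    cv::Mat result;
    cv::matchTemplate(tBin, tmpl, result, cv::TM_CCORR_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
    return maxLoc.y;   // row in `top` where the overlap begins
}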
Hope that helps.
If your images are the same width and image 1 is always on top, I don't see how hard it could be.
Just store the bytes of the last line of image 1.
Then, from the first line to the last line of image 2, run this test:
if the current line of image 2 is not equal to the last line of image 1 -> continue
else -> break the loop
Then you have to define a new byte container for your new image:
just store all the lines of image 1, plus all the lines of image 2 that start at (the found line + 1).
What will make you sweat here is finding the libraries to manipulate all these data structures, but after a bit of linking and documentation digging, you should be able to implement this easily.
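A sketch of that approach with OpenCV doing the byte bookkeeping (assumes both images have the same width and pixel type):

#include <opencv2/opencv.hpp>
#include <cstring>

// Compare two pixel rows byte-for-byte.
static bool rowsEqual(const cv::Mat& a, int ra, const cv::Mat& b, int rb)
{
    return std::memcmp(a.ptr(ra), b.ptr(rb), a.cols * a.elemSize()) == 0;
}

cv::Mat mergeVertical(const cv::Mat& img1, const cv::Mat& img2)
{
    // Find the line of image 2 that equals the last line of image 1.
    int found = -1;
    for (int r = 0; r < img2.rows; ++r)
        if (rowsEqual(img2, r, img1, img1.rows - 1)) { found = r; break; }

    // New container: all lines of image 1, then the lines of image 2
    // starting at (the found line + 1).
    cv::Mat merged;
    if (found >= 0 && found + 1 < img2.rows)
        cv::vconcat(img1, img2.rowRange(found + 1, img2.rows), merged);
    else
        cv::vconcat(img1, img2, merged);   // no overlap found: append all
    return merged;
}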