Closed 8 years ago. This question needs details or clarity and is not currently accepting answers.
I am trying to crop an image. Sometimes the calculated crop rectangle extends beyond the bounds of the original image (since the crop is computed from a distance). Instead of getting an out-of-bounds error, how can I fill the missing area with black (the same effect as in a warpAffine rotation)?
Well, cropping is by definition reducing the size of an image. So if you want to potentially extend the image at the same time, and fill it with black, you could enlarge it first. Something like the following algorithm:
Take the maximal bounding box of the original image and the crop rectangle
Create an empty (black) image of the new dimensions
Copy the original image into the black image
Crop the resulting image
You could wrap this up into a class and give it the same interface as the crop filter, then you would have a generic solution.
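A minimal NumPy sketch of the algorithm above (the function name and the (x, y, w, h) rectangle convention are my own illustration):

```python
import numpy as np

def crop_with_padding(img, x, y, w, h):
    """Crop rect (x, y, w, h) from img; parts outside the image become black."""
    H, W = img.shape[:2]
    # Maximal bounding box of the original image and the crop rectangle
    x0, y0 = min(x, 0), min(y, 0)
    x1, y1 = max(x + w, W), max(y + h, H)
    # Empty (black) image of the new dimensions
    canvas = np.zeros((y1 - y0, x1 - x0) + img.shape[2:], dtype=img.dtype)
    # Copy the original image into the black image
    canvas[-y0:-y0 + H, -x0:-x0 + W] = img
    # Crop the resulting image; the requested rect is now always inside
    return canvas[y - y0:y - y0 + h, x - x0:x - x0 + w]
```

For example, cropping a 50x50 rect starting at (-10, -10) from a white 100x100 image yields a 50x50 result whose top-left corner is black padding.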
Closed 6 years ago. This question needs to be more focused and is not currently accepting answers.
I would like to know how I can approach the following problem: removing the petiole from a leaf, with little to no effect on the leaf itself.
From research, people have tried morphological operations such as top-hat to enhance the petiole and then remove it, but in some cases this does not work well and additionally detects peaks of the leaf (example below).
I will also try segmentation based on the HSV color space, but I would very much appreciate an idea for the BGR space.
From left to right: input image, detected contour, result of the morphological operation (the size of the structuring element depends on the leaf species).
I am using OpenCV with C++.
[example image: the petiole-detection problem]
As mentioned in the comments, I was curious to try this out myself.
And this is what I got:
I used the distance transform, but the final result is not perfect. I have the code in Python if you would like it.
CODE:
import cv2
import numpy as np

# thin structures (the petiole) have small distance-transform values
dist_transform = cv2.distanceTransform(thresh1, cv2.DIST_L2, 5)
ret, stalk = cv2.threshold(dist_transform, 0.095 * dist_transform.max(), 255, 0)
stalk = np.uint8(stalk)
cv2.imshow('stalk_removed.jpg', stalk)
cv2.waitKey(0)
Where thresh1 is the binary image of the leaf.
Closed 6 years ago. This question needs to be more focused and is not currently accepting answers.
I am doing background subtraction, and I obtain a binary image containing the foreground objects plus some noise.
I want to obtain an ROI for each object in the binary image and then analyze it to verify that it is the object I want.
How do I segment only the areas with high pixel intensity (objects)?
One example of obtained image:
Have a look at OpenCV's SimpleBlobDetector; it has several configurable parameters, and there are tons of tutorials online.
The documentation can be found here: http://docs.opencv.org/trunk/d0/d7a/classcv_1_1SimpleBlobDetector.html
Alternatively you could just convolve a white rectangle across multiple scale spaces and return the median values over each scale space.
Closed 8 years ago. This question needs to be more focused and is not currently accepting answers.
I have a collection of photos that have text displayed in them. I'd like to replace that text with a pattern of my choosing. I'm using OCR to find the text so I know its position already and select it as a region of interest.
For example, given this photo:
OCR returns the coordinates where the text is:
I want to replace text to achieve this:
How do I select, remove, and replace the text using OpenCV?
My advice is image binarization. Since you already have the coordinates of the text, treat the binary image as a mask: within those regions, the text pixels in the binary image should be 255, and you can then assign any other value to them.
Once you have the approximate region of interest, run an Otsu threshold routine on the area and you'll get a binary mask (hopefully, provided the image is not too noisy).
Tinker with that binary mask to your heart's content.
Closed 8 years ago. This question needs to be more focused and is not currently accepting answers.
I'm trying to detect the location of a fingertip in an image. I've been able to crop a region of the image that must contain a fingertip and extract the edges using the Canny edge detector, but now I'm stuck. My project description says I can't use skin color for detection, so I cannot find the exact contour of the finger and will have to separate the fingertip using edges alone. Since the finger has a curved arch / letter-U shape, maybe that could be used for detection; but the method has to be rotation- and scale-invariant, and most algorithms I've found so far are not. Does anyone have an idea of how to do this? Thanks to anyone who responds!
This is the result I have now. I want to put a bounding box around the index fingertip, or the highest fingertip, whichever is the easiest.
You may view the tip of the U as a corner and try a corner-detection method such as the Förstner algorithm, which can locate a corner with sub-pixel accuracy, or the Harris corner detector, which has an implementation in OpenCV's features2d module.
There is a very clear and straightforward lecture on the Harris corner detector that I would like to share with you.
Closed 2 years ago. This question needs details or clarity and is not currently accepting answers.
Does anyone know how to map multiple images onto a sphere using C++/OpenGL?
As in the picture here:
images on sphere
You can split the sphere into multiple sections, each one bound to a single image.
It should be relatively easy to generate the coordinates of a sphere slice using the sphere equation.
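The question asks about C++/OpenGL, but the geometry side is language-agnostic; here it is sketched in NumPy for one rectangular slice of a unit sphere, with per-slice texture coordinates (the function and its latitude/longitude convention are my own illustration):

```python
import numpy as np

def sphere_slice(lat0, lat1, lon0, lon1, n=8):
    """Grid of unit-sphere vertices for one slice, plus 0..1 UVs for its image."""
    lat, lon = np.meshgrid(np.linspace(lat0, lat1, n),
                           np.linspace(lon0, lon1, n), indexing='ij')
    # Sphere equation in spherical coordinates (radius 1)
    verts = np.stack([np.cos(lat) * np.cos(lon),
                      np.sin(lat),
                      np.cos(lat) * np.sin(lon)], axis=-1)
    # Texture coordinates run 0..1 across this slice only,
    # so each slice can be bound to its own image
    uvs = np.stack([(lon - lon0) / (lon1 - lon0),
                    (lat - lat0) / (lat1 - lat0)], axis=-1)
    return verts, uvs
```

Each slice's vertex grid would then be uploaded as one mesh with its own texture bound.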
You could just combine all the images into a single texture using an image editing program. Then you would only have to apply a single texture over the entire sphere. To do this you would just need to find a way to import a model of a sphere into your program. (Unless you want to try generating one procedurally.)
The easiest way is to create texture images with an alpha channel.
You can add multiple textures to the shader code and blend them all using the alpha as a mask.
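The per-pixel blend the shader would perform can be sketched on the CPU (the helper name is my own):

```python
import numpy as np

def alpha_blend(base, overlay, alpha):
    """Composite overlay onto base using a per-pixel alpha mask in [0, 1]."""
    a = alpha[..., None]                  # broadcast mask over color channels
    out = overlay * a + base * (1.0 - a)  # standard "over" blend
    return out.astype(base.dtype)
```

In the shader version, `alpha` comes from each texture's alpha channel instead of a separate array.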