matrix image processing in OpenGL CE

I'm trying to create an image filter in OpenGL CE. Currently I am building a series of 4x4 matrices and multiplying them together, then using glColorMask and glColor4f to adjust the image accordingly. I've been able to integrate hue rotation, saturation, and brightness, but I am having trouble adding contrast. So far Google hasn't been too helpful: I've found a few matrices, but they don't seem to work. Do you guys have any ideas?

I have to say, I haven't heard of using a 4x4 matrix to do brightness or contrast. I think of these operations as being done on the histogram of the image, rather than on a local per-pixel basis.
Say, for instance, that your image has values from 0 to 200, and you want to make it brighter. You can then add values to the image, and what is shown on the screen will be brighter. If you want to enhance the contrast on that image, you would do a multiplication like:
(image_value - original_min)/(original_max - original_min) * (new_max - new_min) + new_min
If you want your new min to be 0 and your new max to be 255, then that equation will stretch the contrast accordingly. The original_min and original_max do not have to be the actual min and max of the entire image; they could be the min and max of a subsection of the image, if you want to enhance a particular area and don't mind clipping values above or below your new_min/new_max.
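As a concrete example, here is that formula as a small helper for 8-bit values (a rough sketch; the 0..255 clipping range is assumed):

```cpp
#include <algorithm>
#include <cstdint>

// Linear contrast stretch: maps [originalMin, originalMax] onto [newMin, newMax],
// clipping anything that falls outside the 8-bit range.
uint8_t stretchContrast(uint8_t value, float originalMin, float originalMax,
                        float newMin, float newMax)
{
    float scaled = (value - originalMin) / (originalMax - originalMin)
                 * (newMax - newMin) + newMin;
    return static_cast<uint8_t>(std::clamp(scaled, 0.0f, 255.0f));
}
```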
I suppose if you already know your range and so forth, you could incorporate that formula into a 4x4 matrix to achieve your goal, but only after you've done a pass to find the min and max of the original image.
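For instance, a plain contrast adjustment that scales the channels around a fixed midpoint of 0.5 (rather than the measured min/max) can be packed into a 4x4 matrix along these lines; this is only a sketch, assuming colors in the 0..1 range, column-major layout, and an alpha of 1 so the offset column takes effect:

```cpp
#include <array>

// Build a 4x4 column-major contrast matrix: scales RGB about the midpoint 0.5.
// contrast = 1.0 leaves the image unchanged, < 1.0 flattens it, > 1.0 stretches it.
std::array<float, 16> contrastMatrix(float contrast)
{
    const float t = (1.0f - contrast) * 0.5f;  // translation that keeps 0.5 fixed
    std::array<float, 16> m = {
        contrast, 0.0f,     0.0f,     0.0f,    // column 0
        0.0f,     contrast, 0.0f,     0.0f,    // column 1
        0.0f,     0.0f,     contrast, 0.0f,    // column 2
        t,        t,        t,        1.0f     // column 3: offset rides on alpha (assumed 1)
    };
    return m;
}

// Apply the matrix to an RGBA color; alpha passes through unchanged.
void applyToColor(const std::array<float, 16>& m, float rgba[4])
{
    float out[4];
    for (int row = 0; row < 4; ++row)
        out[row] = m[row]      * rgba[0] + m[4 + row]  * rgba[1]
                 + m[8 + row]  * rgba[2] + m[12 + row] * rgba[3];
    for (int i = 0; i < 4; ++i) rgba[i] = out[i];
}
```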
I would also make sure to uncouple the display of your image from your image data; the above operations are destructive, in that you'll lose information, so you want to keep the original and display a copy.

Related

Finding regions of higher numbers in a matrix

I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Do either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is to go through each pixel and calculate the rate of change relative to its surrounding pixels, the hope being that pixels with high rates of change/steep slopes would signify the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" holds the elevation value, you can find local maxima by applying dilate and then subtracting the result from the original image.
There's a related post with code examples at: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/
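For example, something along these lines (a rough sketch that also uses erode, so the threshold is relative to the surrounding terrain rather than an absolute elevation; the kernel size and minRise are placeholder values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Find pixels that sit at a local elevation maximum and rise at least
// `minRise` above their neighborhood. `dem` is a CV_32F elevation matrix.
cv::Mat raisedRegions(const cv::Mat& dem, float minRise, int kernelSize = 5)
{
    CV_Assert(dem.type() == CV_32F);

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT,
                                               cv::Size(kernelSize, kernelSize));

    cv::Mat localMax, localMin;
    cv::dilate(dem, localMax, kernel);   // each pixel becomes the neighborhood maximum
    cv::erode(dem, localMin, kernel);    // each pixel becomes the neighborhood minimum

    // A pixel equal to its local maximum, with enough relief above the local
    // minimum, is a candidate raised region (tree top, mound, ...).
    cv::Mat isLocalMax = (dem >= localMax);
    cv::Mat rise = localMax - localMin;

    cv::Mat peaks;
    cv::bitwise_and(isLocalMax, rise > minRise, peaks);
    return peaks;                        // 8-bit mask, 255 where a raised region was found
}
```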

Get HU values along a trajectory volume

So, what I am trying to do is to calculate the density profile (HU) along a trajectory (represented by target x,y,z and tangent to it) in a CT. At the moment, I am able to get the profile along a line passing through the target and at a certain distance from the target (entrance). What I would like to do is to get the density profile for a volume (cylinder in this case) of width 1mm or so.
I guess I have to do interpolation of some sort along voxels since depending on the spacing between successive coordinates, several coordinates can point to the same index. For example, this is what I am talking about.
Additionally, I would like to get the density profile for different shapes of the tip of the trajectory, for example:
My idea is that I make a 3 by 3 matrix, representing the shapes of the tip, and convolve this with the voxel values to get HU values corresponding to the tip. How can I do this using ITK/VTK?
Kindly let me know if you need some more information. (I hope the images are clear enough).
If you want to calculate the density the drill tip will encounter, it is probably easiest to create a mask of the tip's cutting surface at a resolution higher than your image. Define a transform matrix M which puts your drill into the wanted position in the CT image.
Then iterate through all the non-zero voxels in the mask, transform their indices to physical points, apply transform M to them, sample (evaluate) the value in the CT image at each transformed point using an interpolator, multiply it by the mask's opacity (in the case of a non-binary mask), and add the value to a running sum.
At the end your running sum will represent the total encountered density. This density sum will be dependent on the resolution of your mask of the tip's cutting surface. I don't know how you will relate it to some physical quantity (like resisting force in Newtons).
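A rough ITK sketch of that loop, assuming the CT is a 3D short image, the tip mask is a float image, and drillTransform is the transform M above (all names here are illustrative):

```cpp
#include "itkImage.h"
#include "itkAffineTransform.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkImageRegionConstIteratorWithIndex.h"

using CTImage  = itk::Image<short, 3>;
using MaskType = itk::Image<float, 3>;

double maskWeightedDensity(CTImage::Pointer ct, MaskType::Pointer tipMask,
                           itk::AffineTransform<double, 3>::Pointer drillTransform)
{
    auto interpolator = itk::LinearInterpolateImageFunction<CTImage, double>::New();
    interpolator->SetInputImage(ct);

    double runningSum = 0.0;
    itk::ImageRegionConstIteratorWithIndex<MaskType> it(
        tipMask, tipMask->GetLargestPossibleRegion());

    for (it.GoToBegin(); !it.IsAtEnd(); ++it) {
        const float opacity = it.Get();
        if (opacity == 0.0f) continue;                          // skip empty mask voxels

        MaskType::PointType p;
        tipMask->TransformIndexToPhysicalPoint(it.GetIndex(), p);
        auto q = drillTransform->TransformPoint(p);             // place the tip in CT space

        if (interpolator->IsInsideBuffer(q))
            runningSum += opacity * interpolator->Evaluate(q);  // HU value at that point
    }
    return runningSum;
}
```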
To get a profile along some path, you would use the resample filter. Set up a transform matrix which maps your starting point to (0,0,0) and your end point to (x,0,0). Set the size of the target image to x by 1 by 1, with the same spacing as the source image.
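A sketch of that resampling idea, where lineTransform is assumed to be an affine/rigid transform that maps the output profile axis onto your trajectory in CT physical space:

```cpp
#include "itkImage.h"
#include "itkAffineTransform.h"
#include "itkResampleImageFilter.h"
#include "itkLinearInterpolateImageFunction.h"

using CTImage = itk::Image<short, 3>;

// ct: the CT volume; lineTransform: maps output coordinates into CT space;
// numSamples: number of points wanted along the trajectory.
CTImage::Pointer sampleProfile(CTImage::Pointer ct,
                               itk::AffineTransform<double, 3>::Pointer lineTransform,
                               unsigned int numSamples)
{
    auto interpolator = itk::LinearInterpolateImageFunction<CTImage, double>::New();

    auto resampler = itk::ResampleImageFilter<CTImage, CTImage>::New();
    resampler->SetInput(ct);
    resampler->SetTransform(lineTransform);
    resampler->SetInterpolator(interpolator);

    CTImage::SizeType size;
    size[0] = numSamples; size[1] = 1; size[2] = 1;   // an x-by-1-by-1 output image
    resampler->SetSize(size);
    resampler->SetOutputSpacing(ct->GetSpacing());    // same spacing as the source
    resampler->Update();

    return resampler->GetOutput();                    // HU profile along the line
}
```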
I don't understand your second question. To get the HU value at the tip, you would sample that point using a high-quality interpolator (for example a linear interpolator). I don't see why the shape of the tip would matter.

OpenCV weight approach for correspondence search and disparities C++

I have an OpenCV application in which I have to implement a correspondence search using varying support weights between an image pair. This work is very similar to "Adaptive Support-Weight Approach for Correspondence Search" by Kuk-Jin Yoon and In So Kweon. The support weights are computed in a given support window.
I calculate the dissimilarity between pixels using the support weights in the two images. The dissimilarity between pixel p and pd is given by the aggregated cost from the paper:
E(p, pd) = sum over q in Np, qd in Npd of [w(p,q) * w(pd,qd) * e(q,qd)] / sum over q in Np, qd in Npd of [w(p,q) * w(pd,qd)]
where pd and qd are the pixels in the target image when pixels p and q in the reference image have a disparity value d, and Np and Npd are the support windows around p and pd.
After this, the disparity of each pixel is selected by the WTA (Winner-Takes-All) method as dp = argmin over d in Sd of E(p, pd), where Sd is the set of allowed disparities.
What I would like to know is how to proceed starting from the dissimilarity formula above (I have already written the functions that compute the dissimilarity and the weights): which pixels do I consider, and where do I start? Any suggestions?
The final result of the work should be similar to:
What could be a good way to do it?
UPDATE1
Should I start by creating a new image, then consider the cost between pixel (0,0) and all the other pixels, find the minimum value, and set that as the value of the new image at pixel (0,0)? And so on with the other pixels?
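To make the update concrete, this is roughly the loop structure I have in mind (a simplified grayscale sketch; windowRadius, maxDisparity, gammaC and gammaP are placeholder parameters):

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <cstdlib>
#include <limits>

// Adaptive support-weight style aggregation with WTA disparity selection.
// left/right are rectified CV_8UC1 images; the result is a CV_8UC1 disparity map.
cv::Mat supportWeightDisparity(const cv::Mat& left, const cv::Mat& right,
                               int maxDisparity, int windowRadius = 5,
                               float gammaC = 7.0f, float gammaP = 36.0f)
{
    cv::Mat disparity(left.size(), CV_8UC1, cv::Scalar(0));

    for (int y = windowRadius; y < left.rows - windowRadius; ++y) {
        for (int x = windowRadius; x < left.cols - windowRadius; ++x) {
            float bestCost = std::numeric_limits<float>::max();
            int bestD = 0;

            for (int d = 0; d <= maxDisparity && x - d >= windowRadius; ++d) {
                float num = 0.0f, den = 0.0f;

                for (int dy = -windowRadius; dy <= windowRadius; ++dy)
                    for (int dx = -windowRadius; dx <= windowRadius; ++dx) {
                        int qy = y + dy, qx = x + dx;
                        float dist = std::sqrt(float(dx * dx + dy * dy));

                        // weight in the reference window (similarity + proximity)
                        float dcL = std::abs(left.at<uchar>(qy, qx) - left.at<uchar>(y, x));
                        float wL = std::exp(-(dcL / gammaC + dist / gammaP));

                        // weight in the target window around the candidate match
                        float dcR = std::abs(right.at<uchar>(qy, qx - d) - right.at<uchar>(y, x - d));
                        float wR = std::exp(-(dcR / gammaC + dist / gammaP));

                        // raw per-pixel dissimilarity e(q, qd)
                        float e = std::abs(left.at<uchar>(qy, qx) - right.at<uchar>(qy, qx - d));

                        num += wL * wR * e;
                        den += wL * wR;
                    }

                float cost = num / den;                  // E(p, pd)
                if (cost < bestCost) { bestCost = cost; bestD = d; }
            }
            disparity.at<uchar>(y, x) = static_cast<uchar>(bestD);  // WTA choice
        }
    }
    return disparity;
}
```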

computing likelihood of pixel belonging to an object using gradient orientation

I am working on object estimation in an image, that is, tracking. Basically, I use the gradient orientation as a feature descriptor for each pixel. I compute the gradient of each pixel and bin the orientation into a 9-bin histogram, so each pixel in the image is represented by a 9-dimensional vector.
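Roughly, the per-pixel descriptor I compute looks like this (a simplified sketch: the neighborhood radius, the unsigned [0,180) binning and the unit vote are assumptions, and in practice the Sobel responses would of course be computed once for the whole image):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// 9-bin gradient orientation descriptor for the pixel at (x, y),
// accumulated over a small square neighborhood of the given radius.
std::vector<float> orientationDescriptor(const cv::Mat& gray, int x, int y, int radius = 4)
{
    cv::Mat gx, gy;
    cv::Sobel(gray, gx, CV_32F, 1, 0);
    cv::Sobel(gray, gy, CV_32F, 0, 1);

    cv::Mat angle;                                   // gradient orientation in degrees
    cv::phase(gx, gy, angle, true);

    std::vector<float> hist(9, 0.0f);
    for (int j = y - radius; j <= y + radius; ++j)
        for (int i = x - radius; i <= x + radius; ++i) {
            if (j < 0 || j >= gray.rows || i < 0 || i >= gray.cols) continue;
            float a = std::fmod(angle.at<float>(j, i), 180.0f);  // fold to [0, 180)
            int bin = std::min(8, static_cast<int>(a / 20.0f));  // 9 bins of 20 degrees
            hist[bin] += 1.0f;               // could weight by gradient magnitude instead
        }
    return hist;
}
```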
At the initialization step, static foreground and background models are constructed as above.
Now the problem I have is that the background and the foreground are composed of many pixels (say k), so I end up with k 9-dimensional histograms per model. How can I compute the likelihood of each pixel so that I can determine whether it belongs to the foreground or the background?
If the background and foreground models were each constructed as a single histogram, then I could use something like compareHist in OpenCV. However, the tracking result is very poor, so I want to work at the pixel level. I cannot think of an appropriate way to compute probabilities for the setup I described above.
Is there any efficient way to do this? One option is a one-vs-all comparison against every histogram in the model, but that is an exhaustive search and computationally expensive.
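For reference, the single-histogram variant I mentioned would look roughly like this, with the per-pixel 9-bin descriptors averaged into one foreground and one background model histogram (fgModel, bgModel and pixelDescriptor are illustrative names):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Likelihood that a pixel belongs to the foreground, based on how close its
// 9-bin orientation descriptor is to the foreground vs. background model.
// All three arguments are 1x9 CV_32F histograms, normalized to sum to 1.
double foregroundLikelihood(const cv::Mat& pixelDescriptor,
                            const cv::Mat& fgModel,
                            const cv::Mat& bgModel)
{
    // Bhattacharyya distance: 0 = identical histograms, 1 = no overlap.
    double dFg = cv::compareHist(pixelDescriptor, fgModel, cv::HISTCMP_BHATTACHARYYA);
    double dBg = cv::compareHist(pixelDescriptor, bgModel, cv::HISTCMP_BHATTACHARYYA);

    // Turn the two distances into a simple normalized score in [0, 1].
    double sFg = 1.0 - dFg;
    double sBg = 1.0 - dBg;
    return (sFg + sBg) > 0.0 ? sFg / (sFg + sBg) : 0.5;
}
```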

How to find whether an image is more or less homogeneous w.r.t. color (hue)?

UPDATE:
I have segmented the image into different regions. For each region, I need to know whether it is more or less homogeneous in terms of color.
What could be the possible strategies to do so?
previous:
I want to check the color variance (preferably the hue variance) of an image in order to find the images made up of homogeneous colors (i.e. the images which have only one or two colors).
I understand that one strategy could be to create a hue histogram and then look at the count of each color, but I have several images altogether, and I cannot create a 180-bin hue histogram for every image because that would be computationally expensive for the whole code.
Is there any built-in OpenCV method, or another simpler approach, to find out whether an image consists of a single homogeneous color or of several colors?
Something that can calculate the variance of the hue image would also be fine; I could not find anything like variance(image).
PS: I am writing the code in C++.
The variance can be computed without a histogram, as the average of the squared values minus the square of the average value. It takes a single pass over the image with two accumulators. Choose a data type that will not overflow.
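A minimal sketch of that single pass on the hue channel, assuming an 8-bit BGR input (note that hue is circular, so for hues that wrap around 0/180 a circular variance would be more appropriate; this ignores that):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Variance of the hue channel, computed in one pass with two accumulators.
double hueVariance(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    double sum = 0.0, sumSq = 0.0;           // accumulators (double will not overflow here)
    for (int y = 0; y < hsv.rows; ++y) {
        const cv::Vec3b* row = hsv.ptr<cv::Vec3b>(y);
        for (int x = 0; x < hsv.cols; ++x) {
            double h = row[x][0];            // hue in [0, 180) for 8-bit images
            sum   += h;
            sumSq += h * h;
        }
    }
    const double n    = static_cast<double>(hsv.rows) * hsv.cols;
    const double mean = sum / n;
    return sumSq / n - mean * mean;          // E[h^2] - (E[h])^2
}
```

cv::meanStdDev on the hue channel gives you essentially the same result in one call (variance is the squared standard deviation).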