finding object size from speed and change in pixel width - python-2.7

This is very likely a duplicate question but I could not find what I was looking for.
I'm trying to determine the size of an object from the speed of the object (actually the camera is moving, but I would guess the calculation is the same) and a change in pixel width. Any help would be appreciated.
speed is roughly 2 m/s
change in pixel width is roughly 3 pixels/frame

I figured out what the problem was:
X width(meters) = ((speed * focal length) / change in pixel width)^.5
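For illustration, a minimal Python sketch that simply plugs numbers into the formula above; the focal length value is an assumed placeholder, not something given in the original question.

speed = 2.0                  # m/s, speed of the camera relative to the object
pixel_width_change = 3.0     # pixels/frame
focal_length = 700.0         # assumed focal length (placeholder value)

# object width in meters, per the formula above
object_width = ((speed * focal_length) / pixel_width_change) ** 0.5
print("estimated object width (m):", object_width)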

Related

Given a section size and a radial angle, make it so that each section is a different color?

So I have a number of objects inside a circle that I want to color based on their radial angle from the center point. I would also like to be able to pass in the desired section size in degrees: if the section size is 10, then every ten degrees would be a different color. So far I have a way to figure out a color given the angle, but it doesn't really restrict it at all; essentially, every angle is a different color.
R = 256 * Cos(angleValue);
G = 256 * Cos(angleValue + 120);
B = 256 * Cos(angleValue - 120);
I was wondering if anyone has an idea on how to divide the color wheel into different sections. It would be a bonus, but not a requirement, if neighboring colors were easily distinguishable (e.g. red next to blue or something similar).
Or, if I am going about this in a totally wrong way, please feel free to provide any feedback. It would be appreciated.
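One possible approach (my own sketch, not from the original thread): quantize the angle into a section index first, then map the section index to a hue. The function name and the golden-angle hue step are my own choices for illustration.

import colorsys

def section_color(angle_deg, section_size=10):
    """Return an (R, G, B) tuple; every angle in the same section gets the same color."""
    section = int(angle_deg % 360) // section_size
    # Step hues by roughly the golden angle so neighboring sections look distinct.
    hue = (section * 137.5 % 360) / 360.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

# Angles 5 and 15 fall into different sections and therefore get different colors.
print(section_color(5), section_color(15))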

OpenCV - Getting a part of an image

I want to get a part of an image loaded into another image. There are several easy ways to do that, for example cv::Mat OutImage = Image(cv::Rect(7,47,1912,980)), but the resulting image is too large. For example:
I have an image with 1920 x 1024 pixels. I want to cut a cv::Rect(7,47,1912,980) from it. I would expect the resulting image to have the size (1912 - 7 = 1905) x (980 - 47 = 933) pixels, but it has 1912 x 980. It seems that OpenCV just cuts at the lower right side and keeps the upper left area.
The dimensions of the image are important, because in the next step I'd like to perform a subtraction, which is only valid if the Mat objects have the same dimensions. I also don't want to use a loop designed by myself, because performance is very important.
Any ideas?
Regards,
Jan
It is actually cv::Rect(x, y, width, height), so you should set the last two parameters to your desired output width and height. Mind the range you set, or it will cause errors.
I have also dealt with this issue; I will just give my example here, which works well for me. You may also try this one.
Rect const box(100, 295, 400, 185); // the first corner is
                                    // (x, y) = (100, 295)
                                    // and the opposite corner is
                                    // (x + width, y + height) = (100 + 400, 295 + 185)
Mat ROI = frame(box);
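For reference, a rough Python/OpenCV sketch of the same idea (the file name is a placeholder): the rectangle is given as (x, y, width, height), and the ROI can also be taken directly with NumPy slicing.

import cv2

img = cv2.imread("frame.png")        # placeholder file name

x, y, w, h = 100, 295, 400, 185      # (x, y, width, height), as in cv::Rect
roi = img[y:y + h, x:x + w]          # NumPy slicing: rows (y) first, then columns (x)

print(roi.shape)                     # -> (185, 400, 3)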

Set colour limit axis in OpenCV 4 (c++) akin to Matlab's CAXIS

Matlab offers the ability to set colour limits for the current axis using CAXIS. OpenCV has applyColorMap, which can be used to highlight differences in pixel intensity in a greyscale image and which I believe maps pixels from 0 to 255.
I am new to Matlab/image processing and have been asked to port a simple program from Matlab which uses the CAXIS function to change the "brightness" of a colour map. I have no experience in Matlab, but it appears that they use this function to "lower" the intensity required for a pixel to be mapped to a more intense colour on the map,
i.e. Colour map using "JET"
When brightness = 1, red = 255
When brightness = 10, red >= 25
The Matlab program allows 16-bit images to be read in and displayed, which obviously gives higher pixel values, whereas everything I've read and done indicates OpenCV only supports 8-bit images (for colour maps).
Therefore my question is: is it possible to provide similar functionality in OpenCV? How do you set the axis limit for a colourmap / how do you scale the colourmap lookup table so that "less" intense pixels are mapped to the more intense regions?
A similar question was asked, with a reply stating the array needs to be "normalised", but unfortunately I don't quite know how to achieve this and can't reply to the answer as I don't have enough rep!
I have gone ahead and used cv::normalize to set the max value in the array to maxPixelValue/brightness, but that doesn't work at all.
I have also experimented with converting my 16-bit image into a CV_8UC1 with a scale factor, to no avail. Any help would be greatly appreciated!
In my opinion you can use cv::normalize to "crop" the values in the source picture to the corresponding ones in the colour map you are interested in. Say you want your image to be mapped to the blue-ish region of the Jet colormap; then you should do something like:
int minVal = 0, maxVal = 80;
cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);
If you plan to apply some kind of custom map, it's fairly easy for a 1- or 3-channel 8-bit image: you only need to create a LUT with 256 values (with the proper number of channels) and apply it using cv::LUT. More about it in this blog; also see the docs about LUT.
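As a rough Python illustration of the LUT idea (my own sketch, not taken from the answer; the file name and the doubling table are placeholders):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # 8-bit grayscale, placeholder file

# Example table: double every intensity so mid-gray already maps to full white.
lut = np.clip(np.arange(256, dtype=np.float32) * 2.0, 0, 255).astype(np.uint8)

remapped = cv2.LUT(img, lut)                           # look up every pixel in the table
colored = cv2.applyColorMap(remapped, cv2.COLORMAP_JET)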
If the image you are working with has a different depth, 16-bit or even floating-point data, I guess all you need to do is write a function like:
template<class T>
T customColorMapper(T input_pixel)
{
    T output_pixel = 0;
    // do something with output_pixel based on input_pixel
    return output_pixel;
}
and apply it to each source image pixel like:
cv::Mat dst_image = src_image.clone(); // copy data
dst_image.forEach<TYPE>([](TYPE& input_pixel, const int* pos_row_col) -> void {
    input_pixel = customColorMapper<TYPE>(input_pixel);
});
Of course, TYPE needs to be a valid type. Maybe a specialized version of this function taking cv::Scalar or cv::Vec3-something would be nice if you need to work with multiple channels.
Hope this helps!
I managed to replicate the Matlab behaviour, but had to resort to manually iterating over each pixel and either setting the value to the maximum value for the image depth or scaling the value where needed.
My code looked something like this:
double min, max; // filled in by minMaxLoc
cv::minMaxLoc(dst, &min, &max);
double axisThreshold = floor(max / contrastLevel);

for (int i = 0; i < dst.rows; i++)
{
    for (int j = 0; j < dst.cols; j++)
    {
        short pixel = dst.at<short>(i, j);
        if (pixel >= axisThreshold)
        {
            pixel = USHRT_MAX;
        }
        else
        {
            pixel *= (USHRT_MAX / axisThreshold);
        }
        dst.at<short>(i, j) = cv::saturate_cast<short>(pixel);
    }
}
In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).
When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit as:
calculatedThreshold = max pixel value / contrast
Each pixel above the threshold gets set to MAX; each pixel below the threshold gets multiplied by a scale factor calculated as:
scale = max pixel value / calculatedThreshold
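A vectorized NumPy sketch of the same threshold-and-scale idea (my own rewrite of the loop above, assuming a 16-bit unsigned image and a contrast factor greater than 1):

import numpy as np

def apply_contrast(img, contrast):
    """Set pixels above the threshold to the maximum, scale the rest up."""
    max_val = np.iinfo(np.uint16).max                  # 65535
    threshold = np.floor(img.max() / contrast)
    scale = max_val / threshold

    out = img.astype(np.float64)
    out = np.where(out >= threshold, max_val, out * scale)
    return np.clip(out, 0, max_val).astype(np.uint16)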
To be honest, I can't say I fully understand the maths behind it; I just used trial and error until it worked. Any help in that department would be appreciated; however, it seems to do what I want!
My understanding of the initial Matlab implementation and the terminology "brightness" is that it is in fact their attempt to scale the colourmap so that the "brighter" the image, the less intense each pixel has to be to map to a particular colour in the colourmap.
Since applyColorMap only works on 8-bit images, when the brightness increases and the colourmap axis values decrease, we need to ensure the pixel values scale accordingly so that they now match up with the "higher" intensity values in the map.
I have seen numerous OpenCV tutorials which use this approach to changing the contrast/brightness, but they often promote the use of the optimised convertTo (especially if you're trying to use the GPU). However, as far as I can see, convertTo applies the alpha/beta values uniformly and not on a pixel-by-pixel basis, therefore I can't use that approach.
I will update this question if I find more suitable OpenCV functions to achieve what I want.

Disparity Map Block Matching

I am writing a disparity matching algorithm using block matching, but I am not sure how to find the corresponding pixel values in the secondary image.
Given a square window of some size, what techniques exist to find the corresponding pixels? Do I need to use feature matching algorithms or is there a simpler method, such as summing the pixel values and determining whether they are within some threshold, or perhaps converting the pixel values to binary strings where the values are either greater than or less than the center pixel?
I'm going to assume you're talking about stereo disparity, in which case you will likely want to use a simple Sum of Absolute Differences (SAD) (read that wiki article before you continue here). You should also read this tutorial by Chris McCormick before you read more here.
Side note: SAD is not the only method, but it's really common and should solve your problem.
You already have the right idea. Make windows, move windows, sum pixels, find minimums. So I'll give you what I think might help:
To start:
If you have color images, first you will want to convert them to black and white. In Python you might use a simple function like this per pixel, where x is a pixel containing (R, G, B) values.
def rgb_to_bw(x):
    return int(x[0]*0.299 + x[1]*0.587 + x[2]*0.114)
You will want this to be black and white to make the SAD easier to compute. If you're wondering why you don't lose significant information from this, you might be interested in learning what a Bayer filter is. The Bayer filter, which is typically RGGB, also explains the weighting ratios of the red, green, and blue portions of the pixel.
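For a whole image, a vectorized version of the same conversion might look like this (a sketch assuming the image is an (H, W, 3) NumPy array in RGB order):

import numpy as np

def rgb_to_bw_image(img):
    """Convert an (H, W, 3) RGB array to an (H, W) grayscale array with the same weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (img[..., :3] @ weights).astype(np.uint8)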
Calculating the SAD:
You already mentioned that you have a window of some size, which is exactly what you want to do. Let's say this window is n x n in size. You would have some window in your left image, WL, and some window in your right image, WR. The idea is to find the pair that has the smallest SAD.
So, for each left window pixel pl at some location (x, y) in the window, you would take the absolute value of its difference with the right window pixel pr, also located at (x, y). You would also want a running value, which is the sum of these absolute differences. In pseudocode:
SAD = 0
for x in range(n):
    for y in range(n):
        SAD += abs(int(WL[x, y]) - int(WR[x, y]))  # int() avoids uint8 wrap-around
After you calculate the SAD for this pair of windows, WL and WR, you will want to "slide" WR to a new location and calculate another SAD. You want to find the pair of WL and WR with the smallest SAD, which you can think of as being the most similar windows. In other words, the WL and WR with the smallest SAD are "matched". When you have the minimum SAD for the current WL, you will "slide" WL and repeat.
Disparity is calculated as the distance between the matched WL and WR. For visualization, you can scale this distance to be between 0 and 255 and output that to another image. I posted three images below to show this.
Typical results (images not reproduced here): left image, right image, and the calculated disparity from the left image.
You can get test images here: http://vision.middlebury.edu/stereo/data/scenes2003/
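Putting the pieces together, a minimal (and slow) Python/NumPy sketch of the whole procedure might look like the following; the function name and parameters are my own, and it assumes rectified grayscale images:

import numpy as np

def disparity_map_sad(left, right, window=5, max_disp=64):
    """Tiny SAD block matcher for rectified grayscale images (2-D uint8 arrays).

    This only sketches the idea described above; it is not tuned for speed
    or for edge handling.
    """
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float64)
    left = left.astype(np.int32)
    right = right.astype(np.int32)

    for y in range(half, h - half):
        for x in range(half, w - half):
            wl = left[y - half:y + half + 1, x - half:x + half + 1]
            best_sad, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                wr = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                sad = np.abs(wl - wr).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d

    # Scale distances to 0-255 for visualization, as described above.
    return (disp * 255.0 / max(disp.max(), 1)).astype(np.uint8)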

Marker and figure size in matplotlib : not sure how it works

I want to make a figure where the marker size depends on the size of the figure. That way, using square markers, no matter what resolution or figure size you choose, all the markers will touch each other, masking the background without overlapping. Here is where I am at:
The marker size is specified in pt^2, with 1 pt = 1/72 inch, the resolution in pixels per inch, and the figure size in pixels (also the proportion that the main subplot represents of the main figure size: 0.8). So, if my graph's limits are lim_min and lim_max, I should be able to get the corresponding marker size using:
marker_size=((fig_size*0.8*72/Resolution)/(lim_max-lim_min))**2
because (fig_size*0.8*72/Resolution) is the size of the figure in points, and (lim_max-lim_min) is the number of markers I want to fill a line.
And that should do the trick!... Well, it doesn't... At all... The markers are so small they are invisible without a zoom, and I don't get why.
I understand this may not be the best way, and not the way you would do it, but I see no reason why it wouldn't work, so I want to understand where I am wrong.
PS : both my main figure and my subplot are squares
Edit:
Okay, so I found the reason for the problem, but not the solution. The problem is the confusion between ppi and dpi. Matplotlib sets the resolution in dpi, which is defined as a unit specific to a scanner or printer depending on the model (?!?).
Needless to say, I am extremely confused about the actual meaning of the resolution in matplotlib. It simply makes absolutely no sense to me. Please, someone help. How do I convert this to a meaningful unit? It seems that the matplotlib website is completely silent on the matter.
If you specify the figure size in inches and matplotlib uses a resolution of 72 points per inch (ppi), then for a given number of markers the width of each marker should be size_in_inches * points_per_inch / number_of_markers points (assuming for now that the subplot uses the entire figure). As I see it, dpi is only used to display or save the figure at a size of size_in_inches * dpi pixels.
If I understand your goal correctly, the code below should reproduce the required behavior:
import numpy as np
import pylab as pl

# Figure settings
fig_size_inch = 3
fig_ppi = 72
margin = 0.12
subplot_fraction = 1 - 2*margin

# Plot settings
lim_max = 10
lim_min = 2
n_markers = lim_max - lim_min

# Centers of each marker
xy = np.arange(lim_min + 0.5, lim_max, 1)

# Size of the marker, in points^2
marker_size = (subplot_fraction * fig_size_inch * fig_ppi / n_markers)**2

fig = pl.figure(figsize=(fig_size_inch, fig_size_inch))
fig.subplots_adjust(margin, margin, 1-margin, 1-margin, 0, 0)

# Create n_markers^2 colors
cc = pl.cm.Paired(np.linspace(0, 1, n_markers*n_markers))

# Plot each marker (I could/should have left out the loops...)
for i in range(n_markers):
    for j in range(n_markers):
        ij = i + j*n_markers
        pl.scatter(xy[i], xy[j], s=marker_size, marker='s', color=cc[ij])

pl.xlim(lim_min, lim_max)
pl.ylim(lim_min, lim_max)
This is more or less the same as what you wrote (in the calculation of marker_size), except that the division by Resolution has been left out.
Result:
Or when fig_ppi is incorrectly set to 60: