I need to compute the OpenGL transformation matrix that transforms a rectangle A,B,C,D into the polygon A',B,C,D (which differs from the first one by a single point).
How can I do that?
First you need to formalize the problem. You have a matrix M and 4 points that get transformed to another 4 points.
M*A = A'
M*B = B
M*C = C
M*D = D
Each of these can be written as 4 scalar equations. For example:
M11*A1 + M12*A2 + M13*A3 + M14*A4 = A'1
M21*A1 + M22*A2 + M23*A3 + M24*A4 = A'2
...
As a result you get 16 linear equations that can be solved with Gaussian elimination. http://en.wikipedia.org/wiki/Gaussian_elimination
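For illustration, here is a rough sketch of building and solving that 16-equation system. It assumes the Eigen library and homogeneous points (x, y, z, 1); neither is required by the approach, they just keep the example short. Note that four coplanar points (a rectangle's corners) do not pin M down uniquely, which is probably why the follow-up below had to fall back to an affine estimator.

// Sketch only: solve M * src[p] = dst[p] for p = 0..3 by stacking the 16
// scalar equations and running Gaussian elimination (Eigen's full-pivot LU).
#include <Eigen/Dense>
#include <array>

Eigen::Matrix4d solveForM(const std::array<Eigen::Vector4d, 4>& src,
                          const std::array<Eigen::Vector4d, 4>& dst)
{
    Eigen::Matrix<double, 16, 16> S = Eigen::Matrix<double, 16, 16>::Zero();
    Eigen::Matrix<double, 16, 1> b;

    // Point p, row r of M gives one equation: sum_c M(r,c) * src[p](c) = dst[p](r).
    // The unknowns are the 16 entries of M laid out row-major.
    for (int p = 0; p < 4; ++p) {
        for (int r = 0; r < 4; ++r) {
            for (int c = 0; c < 4; ++c)
                S(4 * p + r, 4 * r + c) = src[p](c);
            b(4 * p + r) = dst[p](r);
        }
    }

    Eigen::Matrix<double, 16, 1> m = S.fullPivLu().solve(b); // Gaussian elimination step
    Eigen::Matrix4d M;
    for (int i = 0; i < 16; ++i)
        M(i / 4, i % 4) = m(i);
    return M;
}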
Thanks for your answer.
I implemented your solution, but unfortunately it finds a general transformation matrix, and not always an affine transformation (which I need).
I finally solved my problem using opencv::estimateAffine3D http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#estimateaffine3d
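For reference, the call is roughly like this (a minimal sketch; the point values are placeholders and error handling is omitted):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point3f> src = { cv::Point3f(0,0,0), cv::Point3f(1,0,0),
                                     cv::Point3f(1,1,0), cv::Point3f(0,1,0) };   // A, B, C, D
    std::vector<cv::Point3f> dst = { cv::Point3f(0.2f,0.1f,0.3f), cv::Point3f(1,0,0),
                                     cv::Point3f(1,1,0), cv::Point3f(0,1,0) };   // A', B, C, D

    cv::Mat affine;                 // receives the 3x4 affine matrix
    std::vector<uchar> inliers;
    int ok = cv::estimateAffine3D(src, dst, affine, inliers);
    return ok ? 0 : 1;
}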
Say I have the number 32 at a frame rate of 60.
Based on the frame rate, how do I calculate the corresponding number if the frame rate goes to 20 or 70?
Not entirely sure what you are trying to achieve here --
Given your example, where X is 32 when the FPS is 60, you can calculate the per-frame value as X(32) / FPS(60), which gives roughly 0.53. With this value calculated, you can declare a constant float and use it to work out X for any frame rate.
At 20 FPS the value of X would be about 10.6 (0.53 * 20).
At 70 FPS the value of X would be about 37.3 (0.53 * 70).
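A tiny sketch of that calculation (the names are illustrative, not from the question):

#include <iostream>

int main()
{
    const float valueAt60 = 32.0f;
    const float perFrameFactor = valueAt60 / 60.0f;   // ~0.533, value per frame

    std::cout << perFrameFactor * 20.0f << "\n";      // ~10.7 at 20 FPS
    std::cout << perFrameFactor * 70.0f << "\n";      // ~37.3 at 70 FPS
    return 0;
}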
So I'm starting a project about image processing with C++.
The thing is that everything I find online about this matter (blurring an image in C++) comes either with CUDA or with OpenCV.
Is there a way to blur an image with C++ only (for starters)?
If yes, can somebody please share the code or explain?
Thanks!
Firstly you need the image in memory.
Then you need a second buffer to use as a workspace.
Then you need a filter. A common filter would be
1 4 1
4 -20 4
1 4 1
For each pixel, we apply the filter. So we're setting the image to a weighted average of the pixels around it, then subtracting to avoid the overall image going lighter or darker.
Applying a small filter is very simple.
int filter[9] = { 1, 4, 1,
                  4, -20, 4,
                  1, 4, 1 };

for (int y = 0; y < height - 2; y++)
    for (int x = 0; x < width - 2; x++)
    {
        /* start from the centre pixel, then add the weighted 3x3 neighbourhood */
        int total = image[(y + 1) * width + x + 1];
        for (int fy = 0; fy < 3; fy++)
            for (int fx = 0; fx < 3; fx++)
                total += image[(y + fy) * width + x + fx] * filter[fy * 3 + fx];
        output[(y + 1) * width + x + 1] = clamp(total, 0, 255);
    }
You need to special-case the edges, which is just fiddly but doesn't add any theoretical complexity.
When you use faster algorithms than the naive one, it becomes important to set up the edges correctly. You then do the calculations in the frequency domain, and that is a lot faster with a big filter.
If you would like to implement the blurring on your own, you have to somehow store the image in memory. If you have a black and white image, an
unsigned char[width*height]
might be sufficient to store the image; if it is a colour image, perhaps you will have the same array, but three or four times the size (one for each colour channel and one for the so-called alpha-value which describes the opacity).
For the black and white case, you would have to sum up the neighbours of each pixel and calculate their average; this approach transfers to colour images by applying the operation to each colour channel.
The operation described above is a special case of the so-called kernel filter, which can also be used to implement different operations.
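As a rough illustration of that neighbour averaging, a plain 3x3 box blur for a grayscale image might look like this (the row-major unsigned char layout is just one possible choice):

#include <vector>

std::vector<unsigned char> boxBlur3x3(const std::vector<unsigned char>& src,
                                      int width, int height)
{
    std::vector<unsigned char> dst(src);  // copy; border pixels keep their original value
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x)
        {
            int sum = 0;
            for (int fy = -1; fy <= 1; ++fy)
                for (int fx = -1; fx <= 1; ++fx)
                    sum += src[(y + fy) * width + (x + fx)];
            dst[y * width + x] = static_cast<unsigned char>(sum / 9);  // average of the neighbourhood
        }
    return dst;
}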
I would like to compute the velocity of key feature points and set a threshold for motion detection in a video.
I have a Python code with me as follows:
def compute_vel_kf(self, fps):
    if (self.features is None) or (len(self.features) == 0):
        return
    test_image = self.current_frame.copy()
    time_diff_in_sec = 1 / fps
    self.v = []
    for i, p1 in enumerate(self.features):
        p2 = self.features_prev[i]
        # speed = dist/time
        vx, vy = (p1[0][0] - p2[0][0]), (p1[0][1] - p2[0][1])
        v = sqrt(vx * vx + vy * vy) * fps
        ang = math.atan2(vy, vx)
        self.v.append(array([v, ang]))
    return self.v
I have to port it to C++ code. In the C++ code I have used points[1] and points[2], which hold the points detected in the current frame and the previous frame respectively. I need to calculate the velocity of the detected key feature points.
As Tobbe mentioned, you should first try to get some results with sample data, and then ask for help with what you have and what you need next.
To give a brief answer: you should first install an image processing library like OpenCV, and then write some sample code to load and process frames from your video. Then you can segment objects in the first frame, track them in the coming frames, and use the stats to calculate the velocity.
Edit: Now we can see that you already have the positions in the previous and the current frame. The usual method to get the velocity in pixels/second is to calculate the distance (Euclidean or per axis, depending on your need) between the two locations, and then multiply it by the frame rate. However, since the video is most likely taken at many frames per second, you can also do a weighted averaging with the velocity from the previous frame pair.
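A rough C++ sketch of that per-point calculation, mirroring the Python above (OpenCV types are assumed; the function and variable names are made up, so adapt them to your points[1]/points[2] buffers):

#include <opencv2/core/core.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Returns one (speed in pixels/second, angle in radians) pair per tracked point.
std::vector<cv::Vec2f> computeVelocities(const std::vector<cv::Point2f>& current,
                                         const std::vector<cv::Point2f>& previous,
                                         float fps)
{
    std::vector<cv::Vec2f> velocities;
    const std::size_t n = std::min(current.size(), previous.size());
    for (std::size_t i = 0; i < n; ++i)
    {
        float vx = current[i].x - previous[i].x;            // displacement in pixels/frame
        float vy = current[i].y - previous[i].y;
        float speed = std::sqrt(vx * vx + vy * vy) * fps;   // distance * frame rate
        float angle = std::atan2(vy, vx);
        velocities.push_back(cv::Vec2f(speed, angle));
    }
    return velocities;
}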
I am trying to understand how the marching cubes algorithm works.
Source:
http://paulbourke.net/geometry/polygonise/
What I don't understand is how you calculate the "GRIDCELL" values. To be exact, the
double val[8];
part is not clear to me; what is it actually supposed to contain?
typedef struct {
    XYZ p[8];
    double val[8];
} GRIDCELL;
As I understand it, XYZ p[8]; holds the vertex coordinates of the cube. But what is val[8];?
The marching cubes algorithm is -- as explained in the linked description -- an algorithm to build a polygonal representation from sampled data. The
double val[8];
are the samples at the 8 vertices of the cube. So they are not computed; they are measurements from e.g. MRI scans. The algorithm therefore works the other way around: it takes a set of measured numbers and constructs a surface representation for visualization from them.
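As a sketch, filling one GRIDCELL from an already-loaded block of samples could look like this (the volume layout, the helper name and the corner ordering are assumptions; the corner ordering has to match the edge/triangle tables you use):

#include <vector>

// Build the cell whose lower corner is voxel (i, j, k) of an nx*ny*nz volume.
GRIDCELL makeCell(const std::vector<double>& volume,
                  int nx, int ny, int nz,
                  int i, int j, int k)
{
    (void)nz;  // only needed for bounds checking, omitted here

    // One common corner ordering: 4 corners of the bottom face, then the top face.
    static const int off[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
    };

    GRIDCELL cell;
    for (int v = 0; v < 8; ++v)
    {
        int x = i + off[v][0], y = j + off[v][1], z = k + off[v][2];
        cell.p[v].x = x;                                 // corner position (XYZ assumed to have x, y, z)
        cell.p[v].y = y;
        cell.p[v].z = z;
        cell.val[v] = volume[(z * ny + y) * nx + x];     // sampled value at that corner
    }
    return cell;
}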
The val is the level of "charge" for each vertex of the cell; it depends on the type of shape that you want to create.
For example, if you want to make a ball you can sample the values with the formula:
for (int l = 0; l < 8; ++l) {
    float distance = sqrtf(pow(cell.p[l].x - chargepos.x, 2.0) +
                           pow(cell.p[l].y - chargepos.y, 2.0) +
                           pow(cell.p[l].z - chargepos.z, 2.0));
    cell.val[l] = chargevalue / pow(distance, 2.0);
}
After further reading and research, the explanation is quite simple.
First of all:
A voxel represents a value on a regular grid in three-dimensional space.
This value simply represents the so-called isosurface value, or in other words the density of the space at that point.
double val[8];
To simplify:
Basically this should be a value between -1.0f and 0.0f,
where -1.0f means solid and 0.0f means empty space.
For the iso values, Perlin/simplex noise can be used, for example.
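As a small illustration of that convention (a sphere-shaped density; every name here is made up):

#include <algorithm>
#include <cmath>

// Fill one cell so that val is -1 deep inside the sphere and 0 at or beyond its surface.
void fillSphereDensity(GRIDCELL& cell, const XYZ& centre, double radius)
{
    for (int v = 0; v < 8; ++v)
    {
        double dx = cell.p[v].x - centre.x;
        double dy = cell.p[v].y - centre.y;
        double dz = cell.p[v].z - centre.z;
        double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        // (dist - radius) / radius is negative inside, 0 at the surface, positive outside.
        cell.val[v] = std::max(-1.0, std::min(0.0, (dist - radius) / radius));
    }
}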
I don't understand how to convert sRGB to CIELab and back. Help me please; preferably as C++ code.
Convert from sRGB to linear RGB by applying the inverse gamma, convert linear RGB to XYZ using a 3x3 matrix, then convert XYZ to Lab using the standard formula and the D65 white point.
References:
Lab http://en.wikipedia.org/wiki/Lab_color_space
sRGB http://en.wikipedia.org/wiki/SRGB_color_space
The rest... you can do on your own :-)
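Still, a minimal sketch of the forward direction (sRGB -> linear RGB -> XYZ -> Lab, D65 white point) might look like this; the constants are the standard sRGB/D65 ones, the function names are made up, and input channels are expected in [0, 1]:

#include <cmath>

static double srgbToLinear(double c)
{
    // inverse sRGB gamma ("inverse gamma" step above)
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

static double labF(double t)
{
    const double eps   = 216.0 / 24389.0;   // (6/29)^3
    const double kappa = 24389.0 / 27.0;    // (29/3)^3
    return (t > eps) ? std::cbrt(t) : (kappa * t + 16.0) / 116.0;
}

void srgbToLab(double R, double G, double B, double& L, double& a, double& b)
{
    // 1. remove the gamma
    double r = srgbToLinear(R), g = srgbToLinear(G), bl = srgbToLinear(B);

    // 2. linear RGB -> XYZ (standard sRGB matrix, D65)
    double X = 0.4124564 * r + 0.3575761 * g + 0.1804375 * bl;
    double Y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * bl;
    double Z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * bl;

    // 3. XYZ -> Lab, normalised by the D65 white point
    const double Xn = 0.95047, Yn = 1.0, Zn = 1.08883;
    double fx = labF(X / Xn), fy = labF(Y / Yn), fz = labF(Z / Zn);
    L = 116.0 * fy - 16.0;
    a = 500.0 * (fx - fy);
    b = 200.0 * (fy - fz);
}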
Just in case, I have prepared a list of links (in different programming languages) which can be helpful for the conversion process (sRGB to Lab and back) and also for the conversion of sRGB to linear RGB. Linear RGB can be further used for white balance and color calibration of an image (provided a color patch, like a Macbeth color chart).
Interesting links:
(i) Understanding sRGB and linear RGB space: http://filmicgames.com/archives/299; http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
(ii) MATLAB tutorial: https://de.mathworks.com/help/vision/ref/colorspaceconversion.html
(iii) Python package: http://pydoc.net/Python/pwkit/0.2.1/pwkit.colormaps/
(iv) C code: http://svn.int64.org/viewvc/int64/colors/color.c?view=markup
(v) OpenCV does not provide sRGB to linear RGB conversion, but it does the conversion inside the color.cpp code (OpenCV_DIR\modules\imgproc\src\color.cpp). Check out the method called initLabTabs(); there is gamma encoding and decoding there. OpenCV color conversion API: http://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
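For completeness, the plain OpenCV route for 8-bit images looks roughly like this (variable names made up; per the color.cpp tables mentioned above, the 8-bit conversion handles the sRGB gamma internally):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.png");       // 8-bit BGR, assumed to be sRGB
    cv::Mat lab, back;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);   // L scaled to 0..255, a and b offset by 128
    cv::cvtColor(lab, back, cv::COLOR_Lab2BGR);  // and back to sRGB-encoded BGR
    return 0;
}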