Scale/shrink a number based on frame rate [closed] - c++

Say I have the number 32 at a frame rate of 60.
Based on the frame rate, how do I calculate the corresponding number if the frame rate goes to 20 or 70?

Not entirely sure what you are trying to achieve here --
Given your example, where X is 32 when the FPS is 60, you can calculate the value of X for 1 FPS as value / frame rate: 32 / 60 ≈ 0.533. With this value calculated, you can declare a constant float and use it to compute X for any frame rate.
At 20 FPS the value of X would be about 10.7 (0.533 * 20).
At 70 FPS the value of X would be about 37.3 (0.533 * 70).
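A minimal C++ sketch of that idea (the names baseValue, baseFps and scaledFor are mine, not from the question):
#include <iostream>
// Scale the value proportionally to the frame rate.
// The pair (32 at 60 FPS) fixes the per-frame constant.
float scaledFor(float fps)
{
    const float baseValue = 32.0f;
    const float baseFps = 60.0f;
    return baseValue / baseFps * fps; // per-frame constant times FPS
}
int main()
{
    std::cout << scaledFor(20.0f) << '\n'; // ~10.67
    std::cout << scaledFor(70.0f) << '\n'; // ~37.33
}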

Related

Bluring an image in C++/C [closed]

So I'm starting a project about image processing with C++.
The thing is that everything I find online about this matter (blurring an image in C++) uses either CUDA or OpenCV.
Is there a way to blur an image with C++ only? (for starters)
If yes, can somebody please share the code or explain?
Thanks!
Firstly you need the image in memory.
Then you need a second buffer to use as a workspace.
Then you need a filter. A common filter would be
1 4 1
4 -20 4
1 4 1
For each pixel, we apply the filter. So we're setting each pixel to a weighted average of the pixels around it, then subtracting so that the overall image doesn't go lighter or darker.
Applying a small filter is very simple.
int x, y, fx, fy, total;
for (y = 0; y < height - 2; y++)
    for (x = 0; x < width - 2; x++)
    {
        /* start from the centre pixel; the filter weights sum to zero,
           so this keeps the overall brightness unchanged */
        total = image[(y + 1) * width + x + 1];
        for (fy = 0; fy < 3; fy++)
            for (fx = 0; fx < 3; fx++)
                total += image[(y + fy) * width + x + fx] * filter[fy * 3 + fx];
        output[(y + 1) * width + x + 1] = clamp(total, 0, 255);
    }
You need to special-case the edges, which is just fiddly but doesn't add any theoretical complexity.
When we use faster algorithms than the naive one, it becomes important to set up the edges correctly. You then do the calculations in the frequency domain, which is a lot faster with a big filter.
If you would like to implement the blurring on your own, you have to somehow store the image in memory. If you have a black-and-white image, an
unsigned char[width*height]
might be sufficient to store the image; if it is a colour image, you will need the same array, but three or four times the size (one plane for each colour channel and one for the so-called alpha value, which describes the opacity).
For the black-and-white case, you would sum up the neighbours of each pixel and calculate their average; this approach transfers to colour images by applying the operation to each colour channel.
The operation described above is a special case of the so-called kernel filter, which can also be used to implement different operations.
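A minimal sketch of that averaging idea for a black-and-white image (the name boxBlur3x3 and the raw-pointer interface are my choices, not from the answer):
// 3x3 box blur for an 8-bit greyscale image; edge pixels are left untouched.
void boxBlur3x3(const unsigned char* in, unsigned char* out, int width, int height)
{
    for (int y = 1; y < height - 1; y++)
        for (int x = 1; x < width - 1; x++)
        {
            int total = 0;
            for (int fy = -1; fy <= 1; fy++)
                for (int fx = -1; fx <= 1; fx++)
                    total += in[(y + fy) * width + (x + fx)];
            out[y * width + x] = (unsigned char)(total / 9); // average of 9 pixels
        }
}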

Is there any C++ opencv code to compute velocity of key feature points in each frame of video? [closed]

I would like to compute the velocity of key feature points and set a threshold for motion detection in a video.
I have Python code as follows:
# assumes: import math; from math import sqrt; from numpy import array
def compute_vel_kf(self, fps):
    # nothing to do if no features were detected
    if not self.features:
        return
    self.v = []
    for i, p1 in enumerate(self.features):
        p2 = self.features_prev[i]
        # displacement between the previous and the current frame
        vx, vy = p1[0][0] - p2[0][0], p1[0][1] - p2[0][1]
        # speed = dist/time, with time per frame = 1/fps
        v = sqrt(vx * vx + vy * vy) * fps
        ang = math.atan2(vy, vx)
        self.v.append(array([v, ang]))
    return self.v
I have to port it to C++. In the C++ code I have used points[1] and points[2], which hold the current frame's and the previous frame's detected points respectively. I need to calculate the velocity of the detected key feature points.
As Tobbe mentioned, you should first try to get some results with sample data, and then ask for help with what you have and what you need next.
To give a brief answer, you should first install an image processing library like OpenCV, and then write some sample code to load and process frames from your video. Then you can segment objects in the first frame, track them in the coming frames, and use the stats to calculate the velocity.
Edit: Now we can see that you already have the positions in the previous and the current frame. The usual method to get the velocity in pixels/second is to calculate the distance (Euclidean or per axis, depending on your need) between the two locations, and then multiply it by the frame rate. However, since the video is most likely taken at many frames per second, you can also do a weighted averaging with the velocity from the previous frame pair.
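A minimal C++ sketch of that calculation, assuming points[1] and points[2] are std::vector<cv::Point2f> holding the current- and previous-frame positions as in the question (the Velocity struct is my own):
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

struct Velocity { float speed; float angle; };

// points[1]: current-frame positions, points[2]: previous-frame positions
std::vector<Velocity> computeVelocities(const std::vector<cv::Point2f> points[], float fps)
{
    std::vector<Velocity> velocities;
    for (size_t i = 0; i < points[1].size(); i++)
    {
        float vx = points[1][i].x - points[2][i].x; // displacement per frame
        float vy = points[1][i].y - points[2][i].y;
        // speed = dist/time, with time per frame = 1/fps
        float speed = std::sqrt(vx * vx + vy * vy) * fps;
        float angle = std::atan2(vy, vx);
        velocities.push_back({speed, angle});
    }
    return velocities;
}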

Image processing [closed]

I have several bitmap images and want to segment them and find the position (and size) of the continuous-tone, highly correlated areas. By continuous-tone areas (highly correlated segments) I mean only the areas that hold exactly the same pixel values. I have experience in image processing and am using C++ and OpenCV, but I didn't find a library that does this. I am afraid that if I program it myself I will lose performance and the calculation will become inefficient, while I still have a lot of further processing to do. After ten years away from this field I have become rusty and can't find the answers as easily as when I was young. I would be grateful for any ideas, because I am stuck. Thanks for the kind reading and help.
I can demonstrate the concept of my comment using ImageMagick, and this test image, which has the useful property of being noise: you can see it on SO's white background, but the algorithm shouldn't see it.
I can average it over an area of 15x15 like this:
convert test.png -statistic mean 15x15 x.png
which gives this
then threshold and invert it so you can see the areas of continuous tone identified in white
convert test.png -statistic mean 15x15 test.png -compose difference -composite -depth 8 -threshold 1 -negate x.png
You can experiment with different widths and heights of the blurring box like this:
#!/bin/bash
for x in 3 7 15 25; do
    for y in 3 7 15 25; do
        convert -label "${x}x${y}" test.png -statistic mean ${x}x${y} miff:-
    done
done | montage - -frame 5 -tile 4x out.png
which gives this:
and the corresponding masked image thus:
You can then pass that into the Connected Components Analysis like this:
convert test.png -statistic mean 5x5 \
test.png -compose difference -composite \
-depth 8 -threshold 1 -negate \
-define connected-components:verbose=true \
-define connected-components:area-threshold=20 \
-connected-components 8 -auto-level blobs.png
which will give you this output, which contains the coordinates of the blobs:
Objects (id: bounding-box centroid area mean-color):
0: 500x500+0+0 270.8,271.7 177169 srgb(0,0,0)
1: 216x216+52+41 159.5,148.5 46656 srgb(255,255,255)
8: 114x114+63+351 119.5,407.5 10039 srgb(255,255,255)
2: 81x100+354+47 394.0,96.5 8100 srgb(255,255,255)
5: 49x49+348+204 372.0,228.0 2401 srgb(255,255,255)
6: 358x5+55+287 233.5,289.0 1790 srgb(255,255,255)
10: 45x45+244+383 265.9,405.0 1520 srgb(255,255,255)
3: 4x289+451+181 452.5,325.0 1156 srgb(255,255,255)
7: 122x4+57+309 117.5,310.5 488 srgb(255,255,255)
9: 4x114+416+356 417.5,412.5 456 srgb(255,255,255)
4: 15x15+312+185 319.0,192.0 225 srgb(255,255,255)
I can then outline the detected areas on top of the original image.
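Since the question mentions C++ and OpenCV, a rough sketch of the same pipeline (local mean, difference with the original, threshold, connected components) in OpenCV might look like this; the file name and the 15x15 box are just examples:
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("test.png", cv::IMREAD_GRAYSCALE);

    // local mean over a 15x15 box (like -statistic mean 15x15)
    cv::Mat mean;
    cv::blur(img, mean, cv::Size(15, 15));

    // pixels that differ from their local mean are "textured";
    // continuous-tone areas have (near-)zero difference
    cv::Mat diff, mask;
    cv::absdiff(img, mean, diff);
    cv::threshold(diff, mask, 1, 255, cv::THRESH_BINARY_INV);

    // label the connected continuous-tone blobs and print their stats
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids, 8, CV_32S);
    for (int i = 1; i < n; i++)
        std::cout << "blob " << i
                  << " at (" << stats.at<int>(i, cv::CC_STAT_LEFT)
                  << "," << stats.at<int>(i, cv::CC_STAT_TOP)
                  << ") area " << stats.at<int>(i, cv::CC_STAT_AREA) << '\n';
}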

C++ Marching Cubes Algorithm Explanation [closed]

I am trying to understand how the marching cubes algorithm works.
Source:
http://paulbourke.net/geometry/polygonise/
What I don't understand is how you calculate the "GRIDCELL" values. To be exact, the
double val[8];
part is not clear to me: what is it actually supposed to contain?
typedef struct {
    XYZ p[8];
    double val[8];
} GRIDCELL;
As I understand it, XYZ p[8]; holds the vertex coordinates for the output cube. But what is val[8];?
The marching cubes algorithm is -- as explained in the linked description -- an algorithm to build a polygonal representation from sampled data. The
double val[8];
are the samples at the 8 vertices of the cube. So they are not computed; they are measurements from e.g. MRI scans. The algorithm works the other way around: it takes a set of measured numbers and constructs a surface representation for visualization from it.
The val is the level of "charge" at each vertex of the cell; it depends on the type of shape that you want to create.
E.g.: if you want to make a ball, you can sample the values with the formula:
for (int l = 0; l < 8; ++l)
{
    // distance from this cell vertex to the charge's position
    float distance = sqrtf(powf(cell.p[l].x - chargepos.x, 2.0f) +
                           powf(cell.p[l].y - chargepos.y, 2.0f) +
                           powf(cell.p[l].z - chargepos.z, 2.0f));
    // inverse-square falloff of the charge
    cell.val[l] = chargevalue / powf(distance, 2.0f);
}
After further reading and research, the explanation is quite simple.
First of all:
A voxel represents a value on a regular grid in three-dimensional space.
This value simply represents the so-called isosurface value, or in other words the density of the space at that point.
double val[8];
To simplify:
basically this should be a value between -1.0f and 0.0f,
where -1.0f means solid and 0.0f means empty space.
For the iso values, a Perlin/simplex noise can be used, for example.
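A minimal sketch of filling a GRIDCELL this way, using a sphere as the density function (the names sampleDensity and fillCell are my own, the -1/0 convention follows the answer above, and XYZ is redeclared just to keep the snippet self-contained):
#include <cmath>

typedef struct { double x, y, z; } XYZ; // mirrors the linked code
typedef struct {
    XYZ p[8];
    double val[8];
} GRIDCELL;

// Density of a sphere of the given radius centred at the origin:
// -1.0 deep inside, approaching 0.0 at the surface, 0.0 outside.
double sampleDensity(const XYZ &q, double radius)
{
    double d = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z);
    return d >= radius ? 0.0 : -(1.0 - d / radius);
}

// Fill the 8 vertex samples of one cell.
void fillCell(GRIDCELL &cell, double radius)
{
    for (int i = 0; i < 8; ++i)
        cell.val[i] = sampleDensity(cell.p[i], radius);
}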

opengl :how to compute transformation matrix? [closed]

I need to compute the OpenGL transformation matrix that transforms a rectangle A,B,C,D into the polygon A',B,C,D (which differs from the first one in one point).
How can I do that?
First you need to formalize the problem. You have a matrix M and 4 points that get transformed to another 4 points.
M*A = A'
M*B = B
M*C = C
M*D = D
Each of those lines can be written as 4 scalar equations. For example:
M11*A1 + M12*A2 + M13*A3 + M14*A4 = A'1
M21*A1 + M22*A2 + M23*A3 + M24*A4 = A'2
...
As a result you get 16 linear equations that can be solved with Gaussian elimination: http://en.wikipedia.org/wiki/Gaussian_elimination
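A rough sketch of setting up and solving that system; cv::solve is used here (it performs the elimination for you) since OpenCV appears later in this thread, and storing the points as homogeneous 4-vectors is my assumption:
#include <opencv2/core.hpp>

// src[i], dst[i]: the 4 point pairs as homogeneous 4-vectors (x, y, z, 1)
cv::Mat solveForM(const cv::Vec4d src[4], const cv::Vec4d dst[4])
{
    cv::Mat A = cv::Mat::zeros(16, 16, CV_64F); // one row per scalar equation
    cv::Mat b(16, 1, CV_64F);
    for (int i = 0; i < 4; i++)         // point index
        for (int j = 0; j < 4; j++)     // row of M / component of dst
        {
            for (int k = 0; k < 4; k++) // column of M
                A.at<double>(i * 4 + j, j * 4 + k) = src[i][k];
            b.at<double>(i * 4 + j) = dst[i][j];
        }
    cv::Mat m;              // the 16 unknown entries of M, row-major
    cv::solve(A, b, m);     // solves A * m = b (LU decomposition by default)
    return m.reshape(1, 4); // back to a 4x4 matrix
}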
Thanks for your answer.
I implemented your solution, but unfortunately it finds a general transformation matrix, and not necessarily an affine transformation (which I need).
I finally solved my problem using cv::estimateAffine3D: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#estimateaffine3d
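For reference, a minimal sketch of how that call might look (the coordinates are placeholders; note that for points that all lie in one plane, like a flat rectangle, the component along the plane's normal is not determined):
#include <opencv2/calib3d.hpp>
#include <vector>

int main()
{
    // the rectangle A,B,C,D and its target A',B,C,D (placeholder values)
    std::vector<cv::Point3f> src = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };
    std::vector<cv::Point3f> dst = { {0.2f, 0.1f, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };

    cv::Mat affine;             // receives the 3x4 affine transformation
    std::vector<uchar> inliers; // per-point inlier flags from RANSAC
    cv::estimateAffine3D(src, dst, affine, inliers);
    return 0;
}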