SDL2 - Rotating an image with even width and height by 90 degrees - C++

I'm working with C++ and SDL2 on a very basic game. Because I use very small images (max. 64x64), it is important for me to be pixel accurate. But when I try to rotate e.g. a 10x10 image by multiples of 90°, it gets shifted as shown in the following:
I guess that's happening because the center is picked as pixel (5,5), which is not the actual center but a little off towards the bottom right. The true center point lies between (4,4) and (5,5), but SDL doesn't allow float values when rendering. Even though a 10x10 image is perfectly rotatable (it works in e.g. Gimp), SDL doesn't seem to take the even dimensions of the texture into account.
So is there a way to correctly rotate images in SDL, or do I have to check the angle by hand and, if it is a multiple of 90°, shift the result back myself after rotation? For angles that are not multiples of 90° I don't care as much, because the image will be quite distorted when looking at the individual pixels, and you wouldn't notice a shift of 1 pixel as easily.
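If it comes down to that, the "shift it back yourself" workaround could look roughly like this (a minimal, unverified sketch; the offsets are assumptions derived on paper from rotating around (w/2, h/2) and may need tweaking):

// Compensate the half-pixel offset for even-sized textures rotated by
// multiples of 90 degrees; the exact offsets below are assumptions.
#include <SDL.h>
#include <cmath>

void renderRotated90(SDL_Renderer* ren, SDL_Texture* tex, SDL_Rect dst, double angle)
{
    double a = std::fmod(std::fmod(angle, 360.0) + 360.0, 360.0); // normalize to [0, 360)
    if (dst.w % 2 == 0 && dst.h % 2 == 0) {
        if (a == 90.0)  { dst.x -= 1; }              // shift directions not verified
        if (a == 180.0) { dst.x -= 1; dst.y -= 1; }  // against SDL's rasterization,
        if (a == 270.0) { dst.y -= 1; }              // adjust to what you observe
    }
    SDL_RenderCopyEx(ren, tex, nullptr, &dst, angle, nullptr, SDL_FLIP_NONE);
}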

Related

Mismatch between Point Projection and Warped 2-D Image [opencv]

I am using 2 different methods to render an image (as an OpenCV matrix):
1. an implemented projection function that uses the camera intrinsics (focal length, principal point; distortion is disabled) - this function is used in other software packages and is supposed to work correctly (repository)
2. a 2D-to-2D image warping (here, I'm determining the intersections of the corner rays of my camera with my 2D image that should be warped into my camera frame); this backprojection of the corner points uses the same camera model as above
Now, I overlay these two images and what should basically happen is that a projected pen-tip (method 1.) should line up with a line that is drawn on the warped image (method 2.). However, this is not happening.
There is a tiny shift in both directions, depending on the orientation of the pen that is writing, and it is reduced when I am shifting the principal point of the camera. Now my question is, since I am not considering the principal point in the 2D-2D image warping, can this be the cause of the mismatch? Or is it generally impossible to align those two, since the image warping is a simplification of the projection process?
Grey Point: projected origin (should fall in line with the edges of the white area)
Blue Reticle: penTip that should "write" the Bordeaux-colored line
Grey Line: pen approximation
Red Edge: "x-axis" of white image part
Green Edge: "y-axis" of white image part
EDIT:
I also did the same projection with the origin of the coordinate system, and here the mismatch grows the further the origin moves away from the center of the image (so delta[warp, project] gets larger at the image borders compared to the center).
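For reference, this is the projection model I mean in method 1 (a minimal sketch without distortion; fx, fy, cx, cy are the intrinsics):

// Minimal pinhole projection with an explicit principal point (cx, cy).
#include <opencv2/core.hpp>

cv::Point2d projectPinhole(const cv::Point3d& P, double fx, double fy, double cx, double cy)
{
    return cv::Point2d(fx * P.x / P.z + cx,   // the principal point shifts every
                       fy * P.y / P.z + cy);  // projected point by (cx, cy)
}

Since cx and cy enter every projected point, a warp that is built only from corner rays and ignores them would be using a slightly different model, which is what makes me suspect the principal point.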

Invalid camera calibration for a head-mounted Eye Tracking system

I'm working on an Eye Tracking system with two cameras mounted on some kind of glasses. There are optical lenses so that the screen is perceived at around 420 mm from the eye.
From a few dozen pupil samples, we compute two eye models (one for each camera), located in their respective camera coordinate systems. This is based on the work here, but modified so that an estimation of the eye center is found using some kind of brute-force approach to minimize the ellipse projection error on the model given its center position in camera space.
Theoretically, an approximation of the camera parameters would be symmetrical to the lenses on the Y axis. So each camera should be at coordinates of around (17.5 or -17.5, 0, 3.3) mm with respect to the lens coordinate system, with a rotation of around 42.5 degrees on the Y axis.
However, with these values, there is an offset in the result. See below:
The red point is the gaze center estimated by the left eye tracker, the white one is the right eye tracker, in screen coordinates
The screen limits are represented by the white lines.
The green line is the gaze vector, in camera coordinates (projected in 2D for visualization)
The two camera centers found, projected in 2D, are in the middle of the eye (the blue circle).
The pupil samples and current pupils are represented by the ellipses with matching colors.
The offset on X isn't constant, which means the rotation on Y is not exact, and the positions of the cameras aren't precise either. In order to fix this, we used this to calibrate and then this to get the rotation parameters from the rotation matrix.
We added a camera in the middle of the lenses (close to the theoretical (0,0,0) point?) to get the extrinsic and intrinsic parameters of the cameras relative to our lens center. However, with about 50 checkerboard captures from different positions, the results given by OpenCV don't seem correct.
For example, for one camera it gives a translation of about (-14, 0, 10) in lens coordinates and something like (-2.38, 49, -2.83) as rotation angles in degrees.
The previous screenshots were taken with these parameters. The theoretical ones are a bit further apart, but are more likely to reach the screen borders, unlike the OpenCV values.
This is probably because the test camera is in front of the optics, not behind them, where our real (0,0,0) would be located (we just add the distance at which the screen is perceived, 420 mm, on the Z axis afterwards).
However, we have no way to put the camera in (0, 0, 0).
As the system is compact (everything is captured within a few cm²), each degree or millimeter can change the result drastically, so without precise values for the cameras we're a bit stuck.
Our objective here is to find an accurate way to get the extrinsic and intrinsic parameters of each camera, so that we can compute a precise position of the center of the eye of the person wearing the glasses, without any calibration procedure other than looking around (so no fixation points).
Right now, the system is precise enough to give a global indication of where someone is looking on the screen, but there is a divergence between the right and left cameras, so it's not precise enough. Any advice or hint that could help us is welcome :)
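For completeness, the calibration chain we are attempting looks roughly like this (a rough sketch with illustrative names, assuming the middle reference camera and an eye camera can see the same checkerboard views):

// Calibrate each camera, then recover the eye camera's pose relative to the
// reference camera placed near the lens center; names are illustrative.
#include <opencv2/calib3d.hpp>
#include <vector>

void calibratePair(const std::vector<std::vector<cv::Point3f>>& objPts,
                   const std::vector<std::vector<cv::Point2f>>& refPts,   // reference camera
                   const std::vector<std::vector<cv::Point2f>>& eyePts,   // eye camera
                   cv::Size imgSize)
{
    cv::Mat Kref, Dref, Keye, Deye;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objPts, refPts, imgSize, Kref, Dref, rvecs, tvecs);
    cv::calibrateCamera(objPts, eyePts, imgSize, Keye, Deye, rvecs, tvecs);

    cv::Mat R, T, E, F;   // R, T: eye camera pose relative to the reference camera
    cv::stereoCalibrate(objPts, refPts, eyePts, Kref, Dref, Keye, Deye,
                        imgSize, R, T, E, F, cv::CALIB_FIX_INTRINSIC);

    // Euler angles (in degrees) from the rotation matrix via RQ decomposition.
    cv::Mat mtxR, mtxQ;
    cv::Vec3d angles = cv::RQDecomp3x3(R, mtxR, mtxQ);
}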

Film coordinate to world coordinate

I am working on building a 3D point cloud from feature matching using OpenCV 3.1 and OpenGL.
I have implemented 1) camera calibration (so I have the intrinsic matrix of the camera) and 2) feature extraction (so I have 2D points in pixel coordinates).
I was going through a few websites, but generally all of them describe the flow for converting 3D object points to pixel points, whereas I am doing the completely backward projection. Here is the ppt that explains it well.
I have computed film coordinates (u,v) from pixel coordinates (x,y) (with the help of the intrinsic matrix). Can anyone shed light on how I can recover the "Z" of the camera coordinates (X,Y,Z) from the film coordinates (x,y)?
Please guide me on how I can utilize OpenCV functions like solvePnP, recoverPose, findFundamentalMat and findEssentialMat for this goal.
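For orientation, these functions are usually chained roughly like this (a minimal sketch; pts1/pts2 are assumed matched pixel points from two views and K is the intrinsic matrix):

// Two-view reconstruction sketch: essential matrix -> relative pose -> triangulation.
#include <opencv2/calib3d.hpp>
#include <vector>

void twoViewReconstruct(const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2,
                        const cv::Mat& K)
{
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);

    cv::Mat R, t;                     // pose of view 2 relative to view 1 (t is up to scale)
    cv::recoverPose(E, pts1, pts2, K, R, t, mask);

    cv::Mat K64;
    K.convertTo(K64, CV_64F);         // make sure types match for the multiplications
    cv::Mat P1 = K64 * cv::Mat::eye(3, 4, CV_64F);   // projection matrix of view 1
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);
    cv::Mat P2 = K64 * Rt;                           // projection matrix of view 2

    cv::Mat points4D;                 // homogeneous 3D points; divide by w to get X, Y, Z
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
}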
With a single camera and an object rotating on a fixed rotation platform I would implement something like this:
Each camera has resolution xs,ys and a field of view FOV defined by two angles FOVx,FOVy, so either check your camera data sheet or measure it. From that and the perpendicular distance (z) you can convert any pixel position (x,y) to a 3D coordinate (x',y',z') relative to the camera. So first convert the pixel position to angles:
ax = (x - (xs/2)) * FOVx / xs
ay = (y - (ys/2)) * FOVy / ys
and then compute cartesian position in 3D:
x' = distance * tan(ax)
y' = distance * tan(ay)
z' = distance
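In C++ that could look like this (a direct transcription of the formulas above; FOVx, FOVy and the resulting angles are in radians, and the names are illustrative):

// Convert a pixel position to a 3D point relative to the camera, given a
// known perpendicular distance; ax/ay are the per-pixel viewing angles.
#include <cmath>

struct Point3 { double x, y, z; };

Point3 pixelTo3D(double x, double y, int xs, int ys,
                 double FOVx, double FOVy, double distance)
{
    double ax = (x - xs / 2.0) * FOVx / xs;   // horizontal angle of the pixel
    double ay = (y - ys / 2.0) * FOVy / ys;   // vertical angle of the pixel
    return { distance * std::tan(ax),
             distance * std::tan(ay),
             distance };
}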
That is nice, but in a common image we do not know the distance. Luckily, in such a setup, if we turn our object, any convex edge will make a maximum ax angle on the sides when crossing the plane perpendicular to the camera. So check a few frames, and if a maximal ax is detected you can assume it is an edge (or convex bump) of the object positioned at distance.
If you also know the rotation angle ang of your platform (relative to your camera), then you can compute the un-rotated position by using the rotation formula around the y axis (the Ay matrix in the link) and the known platform center position relative to the camera (just a subtraction before the un-rotation)... As I mentioned, all this is just simple geometry.
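That un-rotation could look like this (a small sketch reusing the Point3 struct from above; ang is in radians and center is the platform center in camera coordinates):

// Move the origin to the platform center, then rotate by -ang around the Y
// axis so points from all frames land in one platform-aligned coordinate system.
#include <cmath>

Point3 unrotate(Point3 p, Point3 center, double ang)
{
    p.x -= center.x; p.y -= center.y; p.z -= center.z;
    double c = std::cos(-ang), s = std::sin(-ang);
    return { c * p.x + s * p.z,
             p.y,
            -s * p.x + c * p.z };
}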
In a nutshell:
obtain calibration data
FOVx,FOVy,xs,ys,distance. Some camera datasheets have only FOVx, but if the pixels are square you can approximately compute FOVy from the resolution as
FOVx/FOVy = xs/ys
Beware: with multi-resolution camera modes the FOV can be different for each resolution!
extract the silhouette of your object in the video for each frame
you can subtract the background image to make the detection easier (a short sketch follows after this outline)
obtain platform angle for each frame
so either use IRC data or place known markers on the rotation disc and detect/interpolate...
detect ax maximum
just inspect the x coordinate of the silhouette (for each y line of the image separately), and if a peak is detected, add its 3D position to your model. Let's assume a rotating rectangular box. Some of its frames could look like this:
So inspect one horizontal line in all frames and find the maximal ax. To improve accuracy you can do a closed-loop regulation by turning the platform until the peak is found "exactly". Do this for all horizontal lines separately.
Btw., if you detect no ax change over a few frames, that means a circular shape with the same radius... so you can handle each such frame as an ax maximum.
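The background subtraction mentioned in the outline could look like this in OpenCV (a minimal sketch; a static camera and an object-free background frame are assumed, and the threshold value is just a guess to tune):

// Silhouette by background subtraction: difference against a background frame,
// then threshold; white pixels belong to the object.
#include <opencv2/imgproc.hpp>

cv::Mat extractSilhouette(const cv::Mat& frameGray, const cv::Mat& backgroundGray)
{
    cv::Mat diff, silhouette;
    cv::absdiff(frameGray, backgroundGray, diff);
    cv::threshold(diff, silhouette, 30, 255, cv::THRESH_BINARY);
    return silhouette;
}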
Easy as pie, resulting in a 3D point cloud, which you can sort by platform angle to ease the conversion to a mesh... That angle can also be used as a texture coordinate...
But do not forget that you will lose concave details that are hidden inside the silhouette!
If this approach is not enough, you can use the same setup for stereoscopic 3D reconstruction, because each rotation behaves as a new (known) camera position.
You can't, if all you have is 2D images from that single camera location.
In theory you could use heuristics to infer a Z stacking. But mathematically your problem is underdetermined, and there are literally infinitely many different Z coordinates that would satisfy your constraints. You have to supply some extra information. For example, you could move your camera around over several frames (Google "structure from motion"), or you could use multiple cameras, or use a camera that has a depth sensor and gives you complete XYZ tuples (Kinect or similar).
Update due to comment:
For every pixel in a 2D image there is an infinite number of points that project to it. The technical term for that is a ray. If you have two 2D images of roughly the same volume of space, each image's set of rays (one per pixel) intersects with the set of rays corresponding to the other image. Which is to say that if you determine the ray for a pixel in image #1, it maps to a line of pixels covered by that ray in image #2. Selecting a particular pixel along that line in image #2 will give you the XYZ tuple for that point.
Since you're rotating the object by a certain angle θ about a certain axis a between images, you actually have a lot of images to work with. All you have to do is derive the camera location via an additional transformation: inverse(translate(-a)·rotate(θ)·translate(a)).
Then do the following: Select an image to start with. For the particular pixel you're interested in, determine the ray it corresponds to. For that, simply assume two Z values for the pixel; 0 and 1 work just fine. Transform them back into the space of your object, then project them into the view space of the next camera you chose to use; the result will be two points in the image plane (possibly outside the limits of the actual image, but that's not a problem). These two points define a line within that second image. Find the pixel along that line that matches the pixel in the first image you selected and project that back into space as done with the first image. Due to numerical round-off errors you're not going to get a perfect intersection of the rays in 3D space, so find the point where the rays are closest to each other (this involves solving a quadratic polynomial, which is trivial).
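That closest-point step could look like this (a small sketch with illustrative names; it returns the midpoint of the shortest segment between the two rays p1 + t1*d1 and p2 + t2*d2, which is what the quadratic minimization boils down to):

// Closest point between two (non-parallel) rays in 3D.
#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }

Vec3 closestPointOfRays(Vec3 p1, Vec3 d1, Vec3 p2, Vec3 d2)
{
    Vec3 r = sub(p1, p2);
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, r),  e = dot(d2, r);
    double denom = a * c - b * b;          // approaches 0 for parallel rays
    double t1 = (b * e - c * d) / denom;
    double t2 = (a * e - b * d) / denom;
    Vec3 q1 = add(p1, mul(d1, t1));        // closest point on ray 1
    Vec3 q2 = add(p2, mul(d2, t2));        // closest point on ray 2
    return mul(add(q1, q2), 0.5);
}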
To select which pixel you want to match between images you can use a feature motion tracking algorithm, as used in video compression or similar. The basic idea is that for every pixel, a correlation of its surroundings is performed with the same region in the previous image. Where the correlation peaks is where the content most likely moved from.
With this pixel tracking in place you can then derive the structure of the object. This is essentially what structure from motion does.

How to rotate a rect in SDL2?

I plan on making a game, and I want to create some background animations for said game. One of these animations is a rotating rectangle. I've looked all over, and I cannot find any form of math or logic that allows me to rotate a rectangle (SDL_Rect to be specific, but you might have already known that).
I can't figure out the math myself, and I really don't have any working code for this, so I can't show anything.
Essentially I'm looking for some kind of logic that I can apply to the rectangle's coordinates so that whenever the main game loop runs, it rotates the rectangle by some number of degrees.
You can't rotate an SDL_Rect. If you look at its definition, it's made of coordinates for the top-left corner, the width and the height. There's no way to represent a rectangle with sides that aren't parallel to the coordinate system's axes.
SDL_RenderCopyEx supports drawing rotated textures, though.
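For example (a minimal sketch; the texture creation is not shown and the position/size values are placeholders):

// Draw the rectangle as a texture and let SDL rotate it at render time.
#include <SDL.h>

void drawRotatingRect(SDL_Renderer* ren, SDL_Texture* rectTex, double angleDeg)
{
    SDL_Rect dst = { 100, 100, 120, 60 };  // where and how big to draw the rectangle
    // Passing nullptr as the center rotates around the middle of dst.
    SDL_RenderCopyEx(ren, rectTex, nullptr, &dst, angleDeg, nullptr, SDL_FLIP_NONE);
}

Increasing angleDeg a little on every iteration of the main game loop gives the rotating-rectangle animation.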

Angle of an object relative to the camera? Video and camera output resolutions differ

I am wondering if I have got my thinking right about this. I have calibration done for my camera, and now I want to get the angle of detected objects relative to the camera, only on the x-axis (the horizontal).
I am thinking I can put some grid lines across the image at known pixel values, match those with known real-world distances, and calculate the angle per pixel that way, knowing the sides of the triangles. Starting at the centre of the image at 0 degrees, moving towards the right gives +X degrees and towards the left -X degrees.
Assuming this is a correct way to go about it: for some reason the video I'm working with was recorded at 704x576 pixels, but when I plug the camera into my computer to work with it, it's 640x480 pixels, and it's the same camera that made the recordings. I assume this will affect my results somewhat, with the calibration and definitely with the angle-per-pixel measurement that I want. I am working with OpenCV in C++, and I am wondering if there's a way/function to set the capture size to 704x576 when I open the camera, and if I then do my measurements at this size, can I get a somewhat accurate angle-per-pixel measurement? Or do I need to do something else?
I'm still figuring my way around camera geometry and openCV, and any help would be much appreciated, thanks.
It is probably easier than you think. Say your camera has a 60.0 deg horizontal field of view (FOV). Then each pixel along the X axis is roughly 60.0/640 deg. You can easily calculate the FOV by considering a right triangle with sides formed by the focal length and half of the screen width:
FOV = 2*atan(640/2, focal) where focal length is in pixels
for example, for focal=500 pixels
FOV = 2*atan(640/2, 500) = 1.14rad = 65.2deg
One thing to keep in mind is that the focal length changes proportionally with the screen resolution. For example, if you calculated focal=500 based on a 640x320 image, then for a 320x160 image focal=250.
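For example (a minimal sketch, assuming the focal length in pixels is known, e.g. from your camera matrix):

// Horizontal FOV and per-pixel angle from the focal length (both in pixels).
#include <cmath>

double horizontalFovDeg(double imageWidth, double focalPx)
{
    return 2.0 * std::atan((imageWidth / 2.0) / focalPx) * 180.0 / 3.14159265358979;
}

double pixelToAngleDeg(double x, double imageWidth, double focalPx)
{
    // 0 deg at the image centre, positive to the right, negative to the left.
    return std::atan((x - imageWidth / 2.0) / focalPx) * 180.0 / 3.14159265358979;
}

If the calibration was done at 640x480 but the recordings are 704x576, the focal length would have to be rescaled accordingly (per the proportionality note above), though the two modes may also crop the sensor differently, so this is only an approximation.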