As seen in the picture, we have a 2D rectangle in 3D space.
When I rotate this rectangle (slightly rotated) so that the normal of the plate is perpendicular to the camera's viewing direction, it vanishes (heavily rotated rectangle just before vanishing). Intuition tells me the plate should render as a single line of pixels, but this is not the case (though it is certainly what I'm trying to achieve). This problem seems trivial, but I can't find a solution to it.
I am using two different methods to render an image (as an OpenCV matrix):
a projection function that uses the camera intrinsics (focal length, principal point; distortion is disabled). This function is used in other software packages and is supposed to work correctly (repository)
a 2D-to-2D image warping (here, I determine the intersections of my camera's corner rays with the 2D image that should be warped into the camera frame); this back-projection of the corner points uses the same camera model as above
Now I overlay these two images, and what should happen is that the projected pen tip (method 1) lines up with the line drawn on the warped image (method 2). However, this is not happening.
There is a tiny shift in both directions, depending on the orientation of the pen that is writing, and it is reduced when I shift the principal point of the camera. My question is: since I am not considering the principal point in the 2D-to-2D image warping, can this be the cause of the mismatch? Or is it generally impossible to align the two, since the image warping is a simplification of the projection process?
Grey Point: projected origin (should fall in line with the edges of the white area)
Blue Reticle: penTip that should "write" the Bordeaux-colored line
Grey Line: pen approximation
Red Edge: "x-axis" of white image part
Green Edge: "y-axis" of white image part
EDIT:
I also did the same projection with the origin of the coordinate system, and here the mismatch grows the further the origin moves away from the center of the image (i.e., delta[warp, project] gets larger at the image borders compared to the center).
I'm working with C++ and SDL2 on a very basic game. Because I use very small images (max. 64x64), it is important for me to be pixel-accurate. But when I try to rotate, for example, a 10x10 image by multiples of 90°, it gets shifted as shown in the following:
I guess that's happening because the center is picked as pixel (5,5), which is not the true center but slightly off towards the bottom right. The actual center lies halfway between pixels (4,4) and (5,5), but SDL doesn't allow float values when rendering. Even though a 10x10 image is perfectly rotatable (it works in, e.g., GIMP), SDL doesn't seem to take the even dimensions of the texture into account.
So is there a way to correctly rotate images in SDL, or do I have to check the angle by hand and, if it is a multiple of 90°, shift the image back myself after rotation? For angles that are not multiples of 90° I don't care as much, because the image will be quite distorted at the individual-pixel level anyway, and a 1-pixel shift wouldn't be as noticeable.
I'm struggling with this problem:
I have an image and I want to apply a perspective warp to it (I already have the transformation matrix), but instead of the output containing only the transformed area (like the example below), I want to be able to see the whole image.
EXAMPLE http://docs.opencv.org/trunk/_images/perspective.jpg
Instead of getting only the transformed region, as in this example, I want to transform the whole original image.
How can I achieve this?
Thanks!
It seems that you are computing the perspective transform by selecting the corners of the sudoku grid in the input image and requesting them to be warped to fixed locations in the output image. In your example, it seems that you are requesting the top-left corner to be warped to coordinates (0,0), the top-right corner to (300,0), the bottom-left to (0,300) and the bottom-right to (300,300).
This will always result in the cropping of the image area to the left of the two left corners and above the two top corners (i.e., the image area where x<0 or y<0 in the output image). Also, if you specify an output image size of 300x300, this results in the cropping of the image area to the right of the right corners and below the bottom corners.
If you want to keep the whole image, you need to use different output coordinates for the corners. For example warp TLC to (100, 100), TRC to (400,100), BLC to (100,400) and BRC to (400,400), and specify an output image size of 600x600 for instance.
You can also calculate the optimal coordinates as follows:
Compute the default perspective transform H0 (as you are doing now)
Transform the corners of the input image using H0, and compute the minimum and maximum values for the x and y coordinates of these corners. Let's denote them xmin, xmax, ymin, ymax.
Compute the translation necessary to map the point (xmin,ymin) to (0,0). The matrix of this translation is T = [1, 0, -xmin; 0, 1, -ymin; 0, 0, 1].
Compute the optimised perspective transform H1 = T*H0 and specify an output image size of (xmax-xmin) x (ymax-ymin).
This way, you are guaranteed that:
the four corners of your input sudoku grid will form a true square
the output image will be translated so that no useful image data is cropped above or to the left of the grid corners
the output image will be sized so that no useful image data is cropped below or to the right of the grid corners
However, this will generate black areas, since the warped input is no longer a perfect rectangle in the output, hence some pixels in the output image won't have any correspondence in the input image.
Edit 1: If you want to replace the black areas with something else, you can initialize the destination matrix as you wish and then set the borderMode parameter of the warpPerspective function to BORDER_TRANSPARENT.
So I have no idea how I should be doing what I want to do, so I'll explain as best as I can.
http://i.stack.imgur.com/j65H8.jpg
So imagine that the entire image is a 2D square of 128x128, and I want to apply a texture to each colored part of the square. I also want it to stretch, such that Red, Aqua, Green and Purple never stretch in any direction, Pink stretches in all directions, and Grey, Yellow, Black and Orange stretch along their longest direction (grey/orange: width expands; yellow/black: height expands). When stretched it should look like this:
http://i.stack.imgur.com/wJiKv.jpg
Also I am using C++.