Is sf::View's inverted y-axis standard? How to work around it? - opengl

I have my objects in a world space using right-handed coordinates (x = right, y = up).
When I render them with SFML I run into problems: I can't set up the view matrix in sf::View with y = up, so everything is rendered y-flipped.
One solution I'm considering is flipping the y-axis on every object before rendering it:
ObjectTransformMatrix * MatrixScale(1.0f,-1.0f)
But I think I will have to move the sf::View center to:
y = y - (view_size.y / 2.0)
Why is sf::View y-inverted? Is my solution correct?

Why is sf::View y-inverted?
Most graphics packages/libraries put the screen-space origin at the top-left corner, with X going right and Y going down. This is just a convention, and SFML happens to follow it. Mind you, this isn't left- or right-handed; that would depend on the third axis, if any. I take it that the other coordinate system you're referring to is the conventional mathematical coordinate system.
flipping the y-axis on every object before rendering it
Don't do this! You have a world defined to your convenience. Why change it when you can change the camera (sf::View) transform, which is implicitly applied to every rendered object internally? From the documentation:
sf::View defines a camera in the 2D scene.
This is a very powerful concept: you can scroll, rotate or zoom the entire scene without altering the way that your drawable objects are drawn.
[...]
To apply a view, you have to assign it to the render target. Then, every object drawn in this render target will be affected by the view until you use another view.
Essentially, you would be setting the matrix derived below as the camera's transform, but through the functions exposed by sf::View.
Is my solution correct?
Partly right, and you've guessed the rest: flipping the axes is only part of the solution; you also have to translate the origin to the right position. What you need is M_{m→s}, where m is the math space and s is the screen space. To find it, you transform the screen-space coordinate system until it aligns with the math coordinate system. Since the scales are the same in both systems, we can use the width W and height H (originally from the screen space) as-is.
We have this:

S--->---- W ---------+
|                    |
v                    |
|                    |
|                    |
|                    |
|                    H
|                    |
|                    |
^                    |
|                    |
M--->----------------+
When we apply S_{1, −1}, i.e. scale the X-axis by 1 and the Y-axis by −1 (flip Y), we have
^
|
S--->---- W ---------+
|                    |
|                    |
|                    |
|                    |
|                    |
|                    H
|                    |
|                    |
^                    |
|                    |
M--->----------------+
This new system is no longer S, since its Y is flipped; let's call it S′. Now we have to translate (move) its origin so that it coincides with M. Since we're transforming coordinate systems, not points, we have to do this with respect to S′, the transformed intermediate coordinate system, not S.
We apply T_{0, −H}, i.e. move H units along negative Y. We end up with
+-------- W ---------+
|                    |
|                    |
|                    |
|                    |
|                    |
|                    H
|                    |
|                    |
^                    |
|                    |
O--->----------------+
where both M and S′ are at O.
We have to concatenate S and T to get the final M_{m→s}. Since we're transforming coordinate systems, not points, we have to post-multiply (assuming you're using the column-vector convention).
M_{m→s} = S_{1, −1} × T_{0, −H}

| 1   0  0 |   | 1  0   0 |   | 1   0  0 |
| 0  −1  0 | × | 0  1  −H | = | 0  −1  H |
| 0   0  1 |   | 0  0   1 |   | 0   0  1 |
Say we have a screen that's 5×5 (for simplicity). Transform the point (1, 1) in the world space to screen space:
| 1   0  0 |   | 1 |   | 1 |
| 0  −1  5 | × | 1 | = | 4 |
| 0   0  1 |   | 1 |   | 1 |
(1, 4) is the point's coordinates in screen space.
If you're following the row-vector convention, you have to transpose the equation M = AB, i.e. Mᵀ = Bᵀ Aᵀ. That would give us
| 1   0  0 |   | 1   0  0 |   | 1   0  0 |
| 0   1  0 | × | 0  −1  0 | = | 0  −1  0 |
| 0  −H  1 |   | 0   0  1 |   | 0   H  1 |
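For illustration, here is a minimal sketch (mine, not from the answer) of applying that matrix directly via sf::Transform when drawing, assuming SFML 2 and taking H from the window height. sf::Transform's constructor takes the nine 3×3 entries in row-major order and follows the column-vector convention, so M_{m→s} translates directly:

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "y-up world");
    const float H = static_cast<float>(window.getSize().y);

    // M_{m->s}: flip Y, then translate by H (constructor is row-major)
    const sf::Transform mathToScreen(1.f,  0.f, 0.f,
                                     0.f, -1.f, H,
                                     0.f,  0.f, 1.f);

    sf::CircleShape dot(5.f);
    dot.setPosition(100.f, 100.f); // world space, y up

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.draw(dot, mathToScreen); // drawn at screen y = H - 100
        window.display();
    }
}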

I just reversed the window height. I think that's it.
sf::View view = window.getDefaultView();
view.setSize(WINDOW_WIDTH, -WINDOW_HEIGHT);
window.setView(view);

I found that the best way is simply to flip the y position coordinate before and after rendering a sprite to the window:
void draw(sf::RenderWindow& window, sf::Sprite& sprite)
{
    const sf::Vector2f pos = sprite.getPosition(); // world position (y up)
    sprite.setPosition(pos.x, -pos.y);             // flip y into screen space
    window.draw(sprite);
    sprite.setPosition(pos);                       // restore the world position
}

Related

How to assemble wavefront .obj data into element and vertex arrays of minimal size?

I'm having trouble putting the data inside a wavefront .obj file together.
These are the vec3 and vec2 definitions
template <typename T>
struct vec3 {
    T x;
    T y;
    T z;
};

template <typename T>
struct vec2 {
    T x;
    T y;
};
Used in a vector:
+-----------------------------------+--------------+--------------+-------+
| std::vector<vec3<uint32_t>> f_vec | 0 | 1 | (...) |
+-----------------------------------+--------------+--------------+-------+
| | v_vec_index | v_vec_index | (...) |
| +--------------+--------------+-------+
| | vt_vec_index | vt_vec_index | (...) |
| +--------------+--------------+-------+
| | vn_vec_index | vn_vec_index | (...) |
+-----------------------------------+--------------+--------------+-------+
Where:
v_vec_index is an index of std::vector<vec3<float>> v_vec with its fields containing vertex x, y and z coordinates
vt_vec_index is an index of std::vector<vec2<float>> vt_vec containing texture u and v coordinates
vn_vec_index is an index of std::vector<vec3<float>> vn_vec with normal x, y and z coordinates
Every f_vec field is used to create a sequence of vert_x, vert_y, vert_z, tex_u, tex_v, norm_x, norm_y, norm_z float values inside std::vector<float> vertex_array.
Also, every index of f_vec is by default a value in std::vector<uint32_t> element_array; that is, element_array contains the integers from 0 to f_vec.size() - 1.
The problem is that the vec3 fields inside f_vec may repeat. So, in order to assemble only the unique sequences mentioned above, I planned to turn something like this:
+-----------------+---+---+---+---+---+
| f_vec | 0 | 1 | 2 | 3 | 4 |
+-----------------+---+---+---+---+---+
| | 1 | 3 | 1 | 3 | 4 |
| +---+---+---+---+---+
| | 2 | 2 | 2 | 2 | 5 |
| +---+---+---+---+---+
| | 2 | 4 | 2 | 4 | 5 |
+-----------------+---+---+---+---+---+
Into this:
+------------------------+-----------------+---+---+---+---+---+
| whatever that would be | index | 0 | 1 | 2 | 3 | 4 |
+------------------------+-----------------+---+---+---+---+---+
| | key | 0 | 1 | 0 | 1 | 2 |
| +-----------------+---+---+---+---+---+
| | | 1 | 3 | 1 | 3 | 4 |
| | +---+---+---+---+---+
| | vec3 of indices | 2 | 2 | 2 | 2 | 5 |
| | +---+---+---+---+---+
| | | 2 | 4 | 2 | 4 | 5 |
+------------------------+-----------------+---+---+---+---+---+
Where, every time an element of f_vec is put into the "whatever container":
It would be checked for uniqueness
If it is unique, it would be pushed to the end of the container with its key being the next natural number after the biggest key; the key's value would be pushed to element_array and a new vertex would be created inside vertex_array
If it isn't, it would be pushed to the end of the container with its key being the same as the key of its duplicate; the key's value would be pushed to element_array but vertex_array would remain unchanged
How am I supposed to do it?
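Here is a sketch of one way to do it (my own, not authoritative): key each unique index triple in a std::map, assign keys in insertion order, and reuse the key for duplicates. It assumes the indices in f_vec are already zero-based (raw .obj indices are one-based):

#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

template <typename T>
struct vec3 { T x; T y; T z; };

template <typename T>
struct vec2 { T x; T y; };

void assemble(const std::vector<vec3<std::uint32_t>>& f_vec,
              const std::vector<vec3<float>>& v_vec,
              const std::vector<vec2<float>>& vt_vec,
              const std::vector<vec3<float>>& vn_vec,
              std::vector<float>& vertex_array,
              std::vector<std::uint32_t>& element_array)
{
    // Maps each unique (v, vt, vn) index triple to its key.
    std::map<std::tuple<std::uint32_t, std::uint32_t, std::uint32_t>,
             std::uint32_t> keys;

    for (const vec3<std::uint32_t>& f : f_vec) {
        const auto triple = std::make_tuple(f.x, f.y, f.z);
        const auto it = keys.find(triple);
        if (it == keys.end()) {
            // Unique triple: assign the next key and emit a new 8-float vertex.
            const auto key = static_cast<std::uint32_t>(keys.size());
            keys.emplace(triple, key);
            const vec3<float>& v  = v_vec[f.x];
            const vec2<float>& vt = vt_vec[f.y];
            const vec3<float>& vn = vn_vec[f.z];
            vertex_array.insert(vertex_array.end(),
                                {v.x, v.y, v.z, vt.x, vt.y, vn.x, vn.y, vn.z});
            element_array.push_back(key);
        } else {
            // Duplicate: reuse its key; vertex_array stays unchanged.
            element_array.push_back(it->second);
        }
    }
}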

How to find rotation and translation (transformation matrix) between corresponding points in two different coordinate systems

I am working on camera-lidar calibration and have been stuck on the following problem for some time:
I am using a USB camera and a 2D lidar. I have the coordinates of corresponding points in both the lidar frame and the camera frame (let's say I have 3 points and their coordinates in the lidar frame, and the coordinates of the same 3 points in the camera frame).
Example for one point:
lidar_pt1(xl, yl)
camera_pt1(xc, yc, zc)
...
are known.
If I hardcode the transformation matrix, I get the expected result. Now I am trying not to hardcode it, but to calculate it automatically from the known coordinate values. What I have is 3 points as 2D coordinates in the lidar frame and the same 3 points as 3D coordinates in the camera frame. This is where I am struggling with the math to somehow calculate the rotation from the coordinate values I have. Is there a way to get that rotation?
camera_pt1 = TransformMat * lidar_pt1
TransformMat = ?
I saw some examples using SVD (http://nghiaho.com/?page_id=671), but I think they require bigger data sets, and a minimum of 3 points would not give the best result.
If you only take 3 pairs of coordinates from each system, then the maths is quite straightforward. Here's a simple example:
  |
4 |                     (R)
  |                     : ',
  |                     :   ',
  |                     :     ',
3 |                     :      (P)
  |                     :     ,'
  |                     :   ,'
  |                     : ,'
2 |     (A).....(B)    (Q)
  |      :     ,'
  |      :   ,'
  |      : ,'
1 |     (C)
  |
  |
  |
0 +-------------------------------------
  0      1      2      3      4
Suppose you have a triangle ABC that maps to another triangle PQR. You can represent their vertices in homogeneous coordinates as follows:
      .-        -.          .-        -.
      |  1  2  1 |          |  4  3  1 |
ABC = |  2  2  1 |    PQR = |  3  2  1 |
      |  1  1  1 |          |  3  4  1 |
      '-        -'          '-        -'
You need to find a matrix M that maps ABC onto PQR (i.e., ABC × M = PQR). To do this, just multiply PQR by the inverse of ABC:
if ABC × M = PQR,
then ABC⁻¹ × ABC × M = ABC⁻¹ × PQR
so M = ABC⁻¹ × PQR
There are plenty of references available on how to invert a 3×3 matrix. This should give you the following result:
    .-          -.
    | -1  -1   0 |
M = |  1  -1   0 |
    |  3   6   1 |
    '-          -'
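A minimal numeric sketch (mine, not from the answer) of the same computation, M = ABC⁻¹ × PQR, using a cofactor-based 3×3 inverse:

#include <array>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Inverse of a 3x3 matrix via the adjugate (cofactor) method.
Mat3 inverse(const Mat3& m)
{
    const double det =
        m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
        m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
        m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    Mat3 inv{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c) {
            // Cofactor of entry (c, r), transposed into (r, c); the cyclic
            // index trick absorbs the (-1)^(r+c) sign.
            const int r1 = (c + 1) % 3, r2 = (c + 2) % 3;
            const int c1 = (r + 1) % 3, c2 = (r + 2) % 3;
            inv[r][c] = (m[r1][c1] * m[r2][c2] - m[r1][c2] * m[r2][c1]) / det;
        }
    return inv;
}

Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 out{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            for (int k = 0; k < 3; ++k)
                out[r][c] += a[r][k] * b[k][c];
    return out;
}

int main()
{
    const Mat3 ABC = {{{1, 2, 1}, {2, 2, 1}, {1, 1, 1}}};
    const Mat3 PQR = {{{4, 3, 1}, {3, 2, 1}, {3, 4, 1}}};
    const Mat3 M = mul(inverse(ABC), PQR); // rows: (-1 -1 0), (1 -1 0), (3 6 1)
    for (const auto& row : M)
        std::printf("% .0f % .0f % .0f\n", row[0], row[1], row[2]);
}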

How to get Y value at a given X value of a trendline

I have created a trend-line using the lpoly command (a local polynomial smoothed trend-line).
I want to find the y value of that trend-line at any given x value.
How can I do this?
One can do this using the generate() option of the lpoly command:
webuse motorcycle, clear
lpoly accel time, generate(x y)
The values are stored in the y and x variables (here showing the first 10 observations):
list y x in 1/10
     +------------------------+
     |          y           x |
     |------------------------|
  1. | -1.6245329   2.4000001 |
  2. |  -1.775922   3.5265307 |
  3. | -1.9832878   4.6530613 |
  4. | -2.2217888   5.7795918 |
  5. | -2.3814197   6.9061224 |
     |------------------------|
  6. | -2.5199665    8.032653 |
  7. | -3.3919962   9.1591836 |
  8. | -8.8572222   10.285714 |
  9. | -16.957709   11.412245 |
 10. | -26.693355   12.538775 |
     +------------------------+
If these two variables are then plotted, it can be seen that this is indeed the case:
twoway line y x
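If you need the smoothed value at particular x values rather than on lpoly's default grid, the at() option evaluates the smooth exactly there. A sketch, where xwant is a hypothetical variable holding the x value of interest:

webuse motorcycle, clear
generate xwant = 20 in 1
lpoly accel time, generate(ysm) at(xwant)
list ysm in 1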

State transition diagram for the reader-writer problem

I don't understand what my professor means by the write flag and the read flag. Does 0 mean it is triggered?
He wants us to draw a state transition diagram, but I think I could do that myself if I knew what was going on.
+---------+------------+-----------+----------------+
| Counter | Write flag | Read flag | Interpretation |
+---------+------------+-----------+----------------+
| 0 | 0 | 0 | Write locked |
| 0 | 0 | 1 | Invalid |
| 0 | 1 | 0 | Invalid |
| 0 | 1 | 1 | Available |
| N | 0 | 0 | Write request |
| N | 0 | 1 | Read locked |
| N | 1 | 0 | Invalid |
| N | 1 | 1 | Invalid |
+---------+------------+-----------+----------------+
The write flag and the read flag are each a boolean value, meaning each can hold a 0 or a 1. The state appears to be defined by the value of the counter and the two flags. I think your professor is asking you to draw a state diagram that shows transitions between the different counter/flag value combinations. (My guess is that you are meant to collapse all the counter > 0 sub-states into a single sub-state labeled counter = N.)

Transform scaling doesn't seem to work

I am implementing a column-major transformation matrix that looks something like this:
|-----------|   |------------|   |------------|
| 0 3 6  9  |   | RS R  R  X |   | RS R  R  X |
| 1 4 7 10  |   | R  RS R  Y |   | R  RS R  Y |
| 2 5 8 11  |   | R  R  RS Z |   | R  R  RS Z |
|-----------|   |------------|   | 0  0  0  1 |
                                 |------------|
I understand that scaling is supposed to be applied at positions 0, 4, and 8, but it doesn't seem to work. I set the orientation from a quaternion, set the position as appropriate, and then attempt to multiply my scaling into positions 0, 4, and 8. When this transform is fed into OpenGL, my shapes stretch and squash and do not scale appropriately. Am I missing something here? I thought scaling was a simple multiplication along the diagonal. My orientation code is relatively straightforward, but adding the scaling operation to it results in strange shearing and squashing effects. What am I doing wrong?
The scaling matrix you have in mind is only useful for either scaling alone, or for multiplying into an already existing transformation. As soon as the base transformation is not the identity, the scale factors apply to the whole upper-left 3×3 block, not just the diagonal. Just evaluate the multiplication:
/ Rxx Rxy Rxz \     / Sx  0   0 \
| Ryx Ryy Ryz |  *  | 0   Sy  0 |
\ Rzx Rzy Rzz /     \ 0   0  Sz /

    / Rxx·Sx  Rxy·Sy  Rxz·Sz \
 =  | Ryx·Sx  Ryy·Sy  Ryz·Sz |
    \ Rzx·Sx  Rzy·Sy  Rzz·Sz /
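In practice, this means the scale has to be multiplied into each rotation column (computing R·S), rather than written into the diagonal slots. A minimal sketch, assuming the 12-float column-major layout from the question, where column c occupies indices 3c through 3c+2:

// Bakes the scale into an existing rotation, i.e. computes R * S in place.
// Indices 0..2 hold the first basis column, 3..5 the second, 6..8 the
// third, and 9..11 the translation.
void applyScale(float m[12], float sx, float sy, float sz)
{
    const float s[3] = { sx, sy, sz };
    for (int c = 0; c < 3; ++c)       // rotation columns only
        for (int r = 0; r < 3; ++r)
            m[3 * c + r] *= s[c];     // scale the whole column, not just
}                                     // the diagonal entries 0, 4, 8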