Cairo flips a drawing - C++

I want to draw a triangle and text using C++ and Cairo like this:
|\
| \
|PP\
|___\
If I add the triangle and the text using Cairo I get:
___
| /
|PP/
| /
|/
So the y-axis runs from top to bottom, but I want it from bottom to top. So I tried to change the transformation matrix (cairo_transform(p, &mat);) or to scale the data (cairo_scale(p, 1.0, -1.0);). I get:
|\
| \
|bb\
|___\
Now the triangle is the way I want it, BUT the TEXT is MIRRORED, which I do not want.
Any idea how to handle this problem?

I was in a similar situation to the OP's, which required me to work with a variety of coordinates in a Cartesian coordinate system with the origin at the bottom left. (I had to port an old video game that was developed with a coordinate system different from Cairo's, and because of time constraints, possible calculation mistakes and port precision I decided it was better not to rewrite the whole thing.) Luckily, I found an okay approach to change Cairo's coordinate system. The approach is based on Cairo's internal transformation matrix, which transforms Cairo's input to the user device. The solution was to change this matrix to a reflection matrix, a matrix that mirrors its input through the x-axis, like so:
cairo_t *cr;
cairo_matrix_t x_reflection_matrix;
cairo_matrix_init_identity(&x_reflection_matrix); // could not find a one-liner
/* reflection through the x axis is the identity matrix with the yy
   (y scale) component negated */
x_reflection_matrix.yy = -1.0;
cairo_set_matrix(cr, &x_reflection_matrix);
// With only the reflection in place, the drawing would end up above the
// destination surface, so we translate it back down by the full height
cairo_translate(cr, 0, -SURFACE_HEIGHT); // replace SURFACE_HEIGHT with your surface's height
// ... do your drawing
There is one catch, however: text will also get mirrored. To solve this, one can alter the font transformation matrix. The required code would be:
cairo_matrix_t font_reflection_matrix;
// We first set the size, and then change it to a reflection matrix
cairo_set_font_size(cr, YOUR_SIZE);
cairo_get_font_matrix(cr, &font_reflection_matrix);
// reverse mirror the font drawing matrix
font_reflection_matrix.yy = font_reflection_matrix.yy * -1;
cairo_set_font_matrix(cr, &font_reflection_matrix);
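Putting the two pieces together, here is a minimal, untested sketch of the whole idea; the surface size, coordinates, font and output file name are placeholders and not taken from the question:
#include <cairo.h>

int main()
{
    const double W = 200.0, H = 200.0;
    cairo_surface_t *surface = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, (int)W, (int)H);
    cairo_t *cr = cairo_create(surface);

    // Flip the y axis: y now runs bottom-up, origin at the bottom-left corner
    cairo_matrix_t m;
    cairo_matrix_init(&m, 1, 0, 0, -1, 0, 0);   // arguments: xx, yx, xy, yy, x0, y0
    cairo_set_matrix(cr, &m);
    cairo_translate(cr, 0, -H);

    // Triangle, specified in the flipped (math-like) coordinates
    cairo_move_to(cr, 20, 20);
    cairo_line_to(cr, 20, 180);
    cairo_line_to(cr, 180, 20);
    cairo_close_path(cr);
    cairo_stroke(cr);

    // Text: un-mirror the glyphs by negating yy in the font matrix
    cairo_select_font_face(cr, "Sans", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_NORMAL);
    cairo_set_font_size(cr, 18.0);
    cairo_matrix_t fm;
    cairo_get_font_matrix(cr, &fm);
    fm.yy = -fm.yy;
    cairo_set_font_matrix(cr, &fm);
    cairo_move_to(cr, 30, 30);
    cairo_show_text(cr, "PP");

    cairo_surface_write_to_png(surface, "triangle.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    return 0;
}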

Answer:
Rethink your coordinates and pass them correctly to Cairo. If your coordinate source has an inverted axis, preprocess the data to flip the geometry. That would be called glue code, and it is often necessary.
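As a tiny, hypothetical sketch of such glue code (the helper name and SURFACE_HEIGHT are illustrative only):
// Convert a bottom-up y coordinate into Cairo's top-down y coordinate.
static inline double to_cairo_y(double y_up, double surface_height)
{
    return surface_height - y_up;
}
// usage: cairo_line_to(cr, x, to_cairo_y(y, SURFACE_HEIGHT));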
Background:
It is very common in 2D computer graphics to have the origin (0,0) in the top left corner and the y-axis heading downwards (see GIMP/Photoshop, positioning in HTML, the WebGL canvas). As always, there are other examples too (PDF, for instance).
I'm not sure what the reason is, but I would assume it comes from the reading direction on paper (from top to bottom) and/or the process of rendering/drawing an image on a screen.
To me, it seems to be the easiest way to procedurally draw an image at some position from the first to the last pixel (you don't need to precalculate its size).
I don't think you are alone with your opinion. But I don't think there is a standard math coordinate system either. Even the very common Cartesian coordinate system is incomplete when the arrows that indicate the axis directions are missing.

Summary: From the discussion I assume that there is only one coordinate system used by Cairo: x-axis to the right, y-axis down. If one needs a standard math coordinate system (x-axis to the right, y-axis up) one has to preprocess the data.

Related

Getting 3D world coordinate from (x,y) pixel coordinates

I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some elaborations, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortions given by the CameraInfo msg. I have tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I have put a small sphere at the coordinates given out by the program, to check whether they are correct or not).
Any help would be very much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coords of the points doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coordinates I get for the point do not seem to be correct.
The camera has a pitch angle of pi/2, so it points down at the ground.
I am clearly missing something.
I assume you've seen the Gazebo examples for the Kinect (brief, full). You can get, as topics, the raw image, raw depth, and calculated point cloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own things with the image_raw for the rgb and depth frames (e.g. ML over the rgb frame and finding the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl point cloud in C++, if you include the right headers.
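As a rough, untested sketch of that (the topic name is taken from the config above; the node name and the pixel indices are placeholders), you can subscribe to the point cloud topic directly as a pcl cloud and index it by pixel; note the resulting point is still expressed in the camera frame:
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

// Look up the 3D point behind a pixel of interest in the organized cloud.
void cloudCallback(const Cloud::ConstPtr &cloud)
{
    if (!cloud->isOrganized())
        return;
    pcl::PointXYZ p = cloud->at(320, 240);  // (column, row) of your pixel of interest
    ROS_INFO("Point in camera frame: %f %f %f", p.x, p.y, p.z);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "cloud_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    ros::spin();
    return 0;
}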
Edit (in response):
There's a magical thing in ROS called tf/tf2. Your point cloud, if you look at the msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ROS, runs in the background; it listens for transformations from one frame of reference to another and lets you transform/query data in any of those frames. For example, if the camera is mounted at a rotation relative to your robot, you can specify a static transformation in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this allows you to easily figure out where points are in the world/map frame, versus the robot/base_link frame, or the actuator/camera/etc. frame.
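For instance, here is a minimal, untested sketch of letting tf2 move a point from the camera frame into the map frame; the frame names, node name and sample point are assumptions for illustration:
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "camera_point_to_map");
    ros::NodeHandle nh;

    tf2_ros::Buffer tfBuffer;
    tf2_ros::TransformListener tfListener(tfBuffer);

    // Point of interest, expressed in the camera frame (e.g. taken from cloud.at(x, y))
    geometry_msgs::PointStamped ptCamera, ptMap;
    ptCamera.header.frame_id = "camera";   // use your cloud's msg.header.frame_id here
    ptCamera.header.stamp = ros::Time(0);  // 0 = "latest available transform"
    ptCamera.point.x = 0.5;                // placeholder coordinates
    ptCamera.point.y = 0.0;
    ptCamera.point.z = 1.2;

    try {
        // Wait up to one second for the transform, then apply it
        ptMap = tfBuffer.transform(ptCamera, "map", ros::Duration(1.0));
        ROS_INFO("In map frame: %f %f %f", ptMap.point.x, ptMap.point.y, ptMap.point.z);
    } catch (const tf2::TransformException &ex) {
        ROS_WARN("%s", ex.what());
    }
    return 0;
}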
I would also look at these ROS wiki questions, which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3

'Ray' creation for raypicking not fully working

I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it the x, y, z coordinates returned would be incredibly small.
My current ray creation is a mess. I tried the methods that others proposed initially, but it seemed like whatever method I tried, it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x() + camX, -event->y() - camY, -1.0);
    QVector3D farP  = QVector3D(event->x() + camX, -event->y() - camY,  1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat, I used to load the viewport, model and projection matrices and unproject the mouse coordinates, but within a 1920x1080 window all I got were values in the range of -2 to 2 for x, y and z for each mouse event, which is why I'm trying this method; but this method doesn't work with camera rotation and zoom.
I don't want to do pixel color picking, because who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (this has not been tested directly). Make sure that all vectors are in the same space. If you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately for each of them.
1. Use pos = gluUnProject(winx, winy, winz, ...) to get the position of the mouse coordinate on the near plane in model space; a window depth winz of 0.0 corresponds to the near plane given to glFrustum() or gluPerspective().
2. The origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace.
3. The direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig).
On the website linked they use two points for the ray, and they don't seem to normalize the ray direction vector, so the normalization is optional.
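A rough, untested sketch of that two-point variant; the function and parameter names are made up, and the matrices and viewport are assumed to be the ones used for rendering:
#include <GL/glu.h>
#include <cmath>

// Build a picking ray in model space by unprojecting the mouse position onto
// the near plane (window z = 0.0) and the far plane (window z = 1.0).
void buildPickRay(int mouseX, int mouseY,
                  const GLdouble model[16], const GLdouble proj[16], const GLint viewport[4],
                  GLdouble rayOrig[3], GLdouble rayDir[3])
{
    // Window y grows upwards in OpenGL but downwards in most windowing toolkits
    GLdouble winY = viewport[3] - mouseY;
    GLdouble nx, ny, nz, fx, fy, fz;
    gluUnProject(mouseX, winY, 0.0, model, proj, viewport, &nx, &ny, &nz);
    gluUnProject(mouseX, winY, 1.0, model, proj, viewport, &fx, &fy, &fz);

    rayOrig[0] = nx; rayOrig[1] = ny; rayOrig[2] = nz;
    GLdouble dx = fx - nx, dy = fy - ny, dz = fz - nz;
    GLdouble len = std::sqrt(dx * dx + dy * dy + dz * dz);
    rayDir[0] = dx / len; rayDir[1] = dy / len; rayDir[2] = dz / len;
}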
Ok, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the Qt datatypes for the matrices, and with the logic pertaining to matrix transformations.
The particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (within the range of -1 to 1: x / width * 2 - 1, (height - y) / height * 2 - 1)
Grabbing the 4x4 matrix for my view matrix (it can be the one used when rendering, or recalculated)
In a new matrix, storing the inverse view matrix multiplied by the inverse projection matrix.
In order to build the ray, I had to do the following:
Take the previously calculated matrix product and multiply it by a 4-component vector holding the previously calculated x and y coordinates, then -1, then +1.
Then divide this vector by its last component (w).
Create another 4-component vector just like the last one, but with +1 instead of -1.
Once again divide it by its last component.
Now the coordinates for the ray have been created at the far and near planes, so it can intersect with anything along it in the scene.
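Here is a minimal, untested sketch of those steps with Qt's math types; the names are illustrative, and the projection and view matrices are assumed to be the ones used for rendering:
#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>

struct Ray { QVector3D origin; QVector3D direction; };

// Unproject the mouse position through the inverse of projection * view.
Ray buildMouseRay(int mouseX, int mouseY, int screenW, int screenH,
                  const QMatrix4x4 &projection, const QMatrix4x4 &view)
{
    // 1. Mouse position to normalized device coordinates (note the y flip)
    float ndcX = 2.0f * mouseX / screenW - 1.0f;
    float ndcY = 2.0f * (screenH - mouseY) / screenH - 1.0f;

    // 2. Inverse of the combined view-projection matrix
    QMatrix4x4 invVP = (projection * view).inverted();

    // 3. Unproject one point on the near plane (z = -1) and one on the far plane (z = +1)
    QVector4D nearWorld = invVP * QVector4D(ndcX, ndcY, -1.0f, 1.0f);
    QVector4D farWorld  = invVP * QVector4D(ndcX, ndcY,  1.0f, 1.0f);
    nearWorld /= nearWorld.w();   // perspective divide
    farWorld  /= farWorld.w();

    Ray ray;
    ray.origin    = nearWorld.toVector3D();
    ray.direction = (farWorld - nearWorld).toVector3D().normalized();
    return ray;
}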
I opened a series of questions (because of great uncertainty about my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into consideration when switching the origin of the y axis to a Cartesian system, since Windows has the y axis start at the top left. Additionally, retrieving the matrices was redundant, but also wrong, since they were never declared properly.
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, leading to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (you need to multiply the matrices by a vector, not the reverse), that my near plane value needs to be -1, not 0, and that the last value of the vector to be multiplied with the matrices (the "w" value) needed to be 1.
Credit goes to the individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
It is a lot of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions, one of which serves your purpose.

What is wrong with this attempt to render rotated ellipses in Qt?

1. Goal
My colleague and I have been trying to render rotated ellipses in Qt. The typical solution approach, as we understand it, consists of shifting the center of the ellipse to the origin of the coordinate system, doing the rotation there, and shifting back:
http://qt-project.org/doc/qt-4.8/qml-rotation.html
2. Sample Code
Based on the solution outlined in the link above, we came up with the following sample code:
// Constructors and destructors
RIEllipse(QRect rect, RIShape* parent, bool isFilled = false)
    : RIShape(parent, isFilled), _rect(rect), _angle(30)
{}

// Main functionality
virtual Status draw(QPainter& painter)
{
    const QPen& prevPen = painter.pen();
    painter.setPen(getContColor());
    const QBrush& prevBrush = painter.brush();
    painter.setBrush(getFillBrush(Qt::SolidPattern));

    // Get rectangle center
    QPoint center = _rect.center();
    // Center the ellipse at the origin (0,0)
    painter.translate(-center.x(), -center.y());
    // Rotate the ellipse around its center
    painter.rotate(_angle);
    // Move the rotated ellipse back to its initial location
    painter.translate(center.x(), center.y());
    // Draw the ellipse rotated around its center
    painter.drawEllipse(_rect);

    painter.setBrush(prevBrush);
    painter.setPen(prevPen);
    return IL_SUCCESS;
}
As you can see, we have hard coded the rotation angle to 30 degrees in this test sample.
3. Observations
The ellipses come out at wrong positions, oftentimes outside the canvas area.
4. Question
What is wrong about the sample code above?
Best regards,
Baldur
P.S. Thanks in advance for any constructive response.
P.P.S. Prior to posting this message, we searched around quite a bit on stackoverflow.com.
Qt image move/rotation seemed to reflect a solution approach similar to the link above.
In painter.translate(center.x(), center.y()); you shift your object by the amount of the current coordinate, which results in (2*center.x(), 2*center.y()). You may need:
painter.translate(- center.x(), - center.y());
The theory of moving an object back to its origin, rotating, and then restoring the object's position is correct. However, the code you've presented is not translating and rotating the object at all, but translating and rotating the painter. In the example question that you've referred to, they want to rotate the whole image about an object, which is why they move the painter to the object's centre before rotating.
The easiest way to do rotations about a QGraphicsItem is to initially define the item with its centre at the centre of the object, rather than at its top left corner. That way, any rotation will automatically be about the object's centre, without any need to translate the object.
To do this, you'd define the item with a bounding rect for x,y,width,height with (-width/2, -height/2, width, height).
Alternatively, assuming your item is inherited from QGraphicsItem or QGraphicsObject, you can use the function setTransformOriginPoint before any rotation.
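A small, untested sketch of that item-centred approach; the helper function and its arguments are made up for illustration:
#include <QGraphicsEllipseItem>
#include <QGraphicsScene>
#include <QPointF>

// Add an ellipse whose local origin is its own centre, so setRotation()
// rotates it about that centre; its position in the scene is set separately.
QGraphicsEllipseItem *addRotatedEllipse(QGraphicsScene *scene, const QPointF &centre,
                                        qreal width, qreal height, qreal angleDegrees)
{
    // Bounding rect (-w/2, -h/2, w, h) puts the local (0,0) at the ellipse centre
    QGraphicsEllipseItem *item =
        scene->addEllipse(-width / 2.0, -height / 2.0, width, height);
    item->setPos(centre);                    // where the centre should sit in the scene
    item->setTransformOriginPoint(0, 0);     // default anyway, shown for clarity
    item->setRotation(angleDegrees);         // rotation is now about the centre
    return item;
}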

Qt graph plotting, how to change the coordinate system/geometry of QGraphicsView

I am using Qt and trying to plot a graph with QGraphicsView and QGraphicsScene. I don't want any additional dependencies, which is why I don't use QWT. When I plot my data, at the moment I use
scene->addLine(x1, y1, x2, y2, pen);
and this draws a line between the two points in the QGraphicsScene. But this uses a top-left x=0, y=0 system... I would like to use my own system, like in the picture below. Another problem is that I have double values from -3 to +3. Does anyone have experience with QGraphicsScene and QGraphicsView and can tell me how I can accomplish this?
Try looking into QGraphicsView::setTransform. This matrix defines how "scene coordinates" are translated to "view coordinates"; it's a common concept in graphics programming.
There are also convenience functions scale(), rotate(), translate() and shear() which modify the view's current transformation matrix.
So you could do something like this:
scale(1.0, -1.0); // invert Y axis
// translate to account for fact that Y coordinates start at -3
translate(0.0, 3.0);
// scale Y coordinates to height of widget
scale(1.0, (qreal)viewport()->size().height()/6.0);
And since this is dependent on the size of the widget, you'd also want to catch any resize events and reset the transformation matrix again there - assuming you want "3" to represent the top of the viewport and "-3" the bottom.
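One untested way to sketch that, slightly different from the manual transform above in that it lets fitInView compute the scaling on every resize; the class name and the fixed -3..+3 scene rectangle are assumptions:
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QRectF>
#include <QResizeEvent>

// A view that flips the y axis once and refits the -3..+3 data range on resize.
class PlotView : public QGraphicsView
{
public:
    explicit PlotView(QGraphicsScene *scene, QWidget *parent = nullptr)
        : QGraphicsView(scene, parent)
    {
        setSceneRect(-3.0, -3.0, 6.0, 6.0);  // the data range of the plot
        scale(1.0, -1.0);                    // y axis now points upwards
    }

protected:
    void resizeEvent(QResizeEvent *event) override
    {
        QGraphicsView::resizeEvent(event);
        // Stretch the data range onto the viewport; the y flip from the
        // constructor is preserved by fitInView's rescaling.
        fitInView(QRectF(-3.0, -3.0, 6.0, 6.0), Qt::IgnoreAspectRatio);
    }
};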

Rotating OpenGL scene in 2 axes

I have a scene which contains objects located anywhere in space and I'm making a trackball-like interface.
I'd like to make it so that I can move 2 separate sliders to rotate it about the x and y axes respectively:
glRotatef(drawRotateY,0.0,1.0f,0);
glRotatef(drawRotateX,1.0f,0.0,0.0);
//draw stuff in space
However, the above code won't work because the X rotation will then be dependent on the Y rotation.
How can I achieve this without using gluLookAt()?
Edit:
I'd like to say that my implementation is even simpler than a trackball interface. Basically, if the x slider value is 80 and the y slider value is 60, rotate vertically by 80 degrees and horizontally by 60 degrees. I just need to make them independent of each other!
This code should get you started: http://www.cse.ohio-state.edu/~crawfis/Graphics/VirtualTrackball.html
It explains how to implement a virtual trackball using the current mouse position in the GL window.
You could probably use something like this:
// angle is the length of the (drawRotateX, drawRotateY, 0) vector,
// the axis is that vector normalized (sqrtf needs <math.h>)
float length = sqrtf(drawRotateX * drawRotateX + drawRotateY * drawRotateY);
if (length > 0.0f)
    glRotatef(length, drawRotateX / length, drawRotateY / length, 0.0f);
When you say rotate vertically and horizontally, do you mean like an anti-aircraft gun - rotating around the vertical Z axis to face a particular compass heading (yaw), and then rotating to a particular elevation (pitch)?
If this is the case, then you just need to do your two rotations in the right order, and all will be well. In this example, you must do the 'pitch' rotation first, and then the 'yaw' rotation. All will work out fine.
If you mean something more complicated (e.g. like the 'Cobra' spaceship in Elite), then you will need a more fiddly solution.