Blit a rectangle at a given coordinate - python-2.7

In Pygame and Python 2.7, how can I blit a rectangle at a certain point represented by a set of coordinates?
I know I can use this:
screen.blit(img.image, img.rect.topleft)
But I want the rectangle to be at a precise point on the screen.

If you need the topleft corner of the rectangle at point (34, 57), you can do
screen.blit(img.image, (34,57) )
or this
img.rect.topleft = (34,57)
screen.blit(img.image, img.rect)
or this
img.rect.x = 34
img.rect.y = 57
screen.blit(img.image, img.rect)
If you need the center of the rectangle at point (34, 57):
img.rect.center = (34,57)
screen.blit(img.image, img.rect)
If you need the rectangle in the center of the screen:
(especially useful if you need to show text (e.g. "PAUSE") in the center of the screen, or text in the center of a rectangle to create a button)
img.rect.center = screen.get_rect().center
screen.blit(img.image, img.rect)
If you need the rectangle touching the right border of the screen:
img.rect.right = screen.get_rect().right
screen.blit(img.image, img.rect)
If you need the rectangle in the bottom-left corner of the screen:
img.rect.bottomleft = screen.get_rect().bottomleft
screen.blit(img.image, img.rect)
And there are more - see pygame.Rect:
x,y
top, left, bottom, right
topleft, bottomleft, topright, bottomright
midtop, midleft, midbottom, midright
center, centerx, centery
Assigning to any of the attributes above moves the rect without changing its width and height.
If you change x (or any other value), you automatically get new values for left, right and the others.
BTW: as you can see, you can use img.rect directly as the position argument in blit().
BTW: you can also do this (for example in __init__):
img.rect = img.image.get_rect(center=screen.get_rect().center)
to center the object on the screen.
BTW: you can use the same approach to blit an image/Surface onto another Surface at a precise point. For example, you can put text in the center of some surface (e.g. a button) and then put that surface in the bottom-right corner of the screen.

From your code:
screen.blit(img.image, img.rect.topleft)
will put the image at (0, 0), because a rect obtained with get_rect() is positioned at (0, 0) by default until you move it. If you want to draw at a specific coordinate, simply do this:
screen.blit(image, (x, y)) #x and y are the respective position coordinates

Related

Why is the screen space coordinate system for my sfml rendered app inverted?

I am learning C++ and I thought I'd make the original Asteroids game with a fresh coat of paint using the SFML graphics library. However, for my player sprite, while the origin is at the top-left corner of the screen, to the right of it is the negative x axis and downwards is the negative y axis (the opposite of what they're supposed to be in both cases). Also, no matter what object or rotation, invoking the setRotation function always rotates any object about the top-left corner of the screen, even if, for that object, I have set the origin to the object's center.
#include <SFML/Graphics.hpp>
using namespace sf;

const int W{ 1200 }, H{ 800 };
const float degToRad = 0.017453f;

int main() {
    float x{ -600 }, y{ -400 };
    float dx{}, dy{}, angle{};
    bool thrust;

    RenderWindow app(VideoMode(W, H), "Asteroids!");
    app.setFramerateLimit(60);

    Texture t1, t2;
    t1.loadFromFile("images/spaceship.png");
    t2.loadFromFile("images/background.jpg");
    Sprite sPlayer(t1), sBackground(t2);
    sPlayer.setTextureRect(IntRect(40, 0, 40, 40));
    sPlayer.setOrigin(-600, -400);

    while (app.isOpen())
    {
        app.clear();
        app.draw(sPlayer);
        app.display();
    }
    return 0;
}
The above code draws the player (spaceship.png) at the center of my rendered window (app), but notice how I have had to put in negative coordinates. Also, if I further add the code for taking keyboard input and call the setRotation function, instead of rotating my sPlayer sprite about its center (i.e. (-600, -400)), it rotates the sprite about the top-left corner of the screen, which is (0, 0). I can't find any explanation for this in the SFML online documentation. What should I do?
As I mentioned, I have tried reading the documentation and I've watched online tutorials, but to no avail.
Origin is the point on the sprite where you "hold" it.
Position is the point on the screen where you put the Origin of the sprite.
In short, you take your sprite by its Origin and place it so the Origin lands on Position.
By default, both Origin and Position are (0, 0), so the top left of your sprite is put at the top left of the screen. What you did was to say "take this point on the sprite, which is way to the upper-left of the actual visible part of the sprite, and put it at the top left of the screen". This had the effect of moving your sprite to the bottom right.
You probably want something like:
// This will make sure that the Origin, i.e. the point that defines the center of rotation and other transformations, is at the center of the ship
sPlayer.setOrigin(sprite_width / 2, sprite_height / 2);
// This will put Origin (center of the ship) at center of the screen
sPlayer.setPosition(screen_width / 2, screen_height / 2);
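Putting the two calls together, here is a minimal sketch under the question's own setup (W, H, sPlayer and angle are the question's variables; nothing else is assumed):

// Origin at the sprite's own center, taken from its local bounds
sf::FloatRect bounds = sPlayer.getLocalBounds();
sPlayer.setOrigin(bounds.width / 2.f, bounds.height / 2.f);

// Put that center at the middle of the window
sPlayer.setPosition(W / 2.f, H / 2.f);

// Rotation now happens about the ship's center, not the window corner
sPlayer.setRotation(angle);   // SFML angles are in degrees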

Getting the QTransform of a resizable selection area

I have built a small custom QML item that is used as a selection area (something like the QRubberBand component provided in Qt Widgets). The item also gives the user the ability to resize the content of the selection, so by grabbing the bottom corner of the selection rectangle it is possible to drag to enlarge the content. After the user has finished resizing, I would like to compute the QTransform matrix of the transformation. QTransform provides a convenient QTransform::scale method to get a scale transformation matrix (which I can use by comparing the width and height ratios with the previous size of the selection). The problem is that QTransform::scale assumes that the center point of the transformation is the center of the object, but I would like my transformation origin to be the top left of the selection (since the user is dragging from the bottom right).
So for example, if I have the following code:
QRectF selectionRect = QRectF(QPointF(10,10), QPointF(200,100));
// let's resize the rectangle by changing its bottom-right corner
auto newSelectionRect = selectionRect;
newSelectionRect.setBottomRight(QPointF(250, 120));
QTransform t;
t.scale(newSelectionRect.width()/selectionRect.width(), newSelectionRect.height()/selectionRect.height());
The problem here is that if I apply the transformation t to my original selectionRect I don't get my newSelectionRect back; instead I get:
QRectF(QPointF(10*sx, 10*sy), QPointF(200*sx, 100*sy))
where sx and sy are the scale factors of the transform. I would like a way to compute the QTransform of my transformation that gives back newSelectionRect when applied to selectionRect.
The problem lies in this assumption:
QTransform::scale assumes that the center point of the transformation is the center of the object
All transformations performed by QTransform are relative to the origin of the axes; they are simply applications of the usual transformation matrices (https://en.wikipedia.org/wiki/Transformation_matrix).
Also, QTransform::translate (https://doc.qt.io/qt-5/qtransform.html#translate) states:
Moves the coordinate system dx along the x axis and dy along the y axis, and returns a reference to the matrix.
Therefore, what you are looking for is:
QTransform t;
t.translate(+10, +10); // Move the origin to the top left corner of the rectangle
t.scale(newSelectionRect.width()/selectionRect.width(), newSelectionRect.height()/selectionRect.height()); // scale
t.translate(-10, -10); // move the origin back to where it was
QRectF resultRect = t.mapRect(selectionRect); // resultRect == newSelectionRect!
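More generally, the same translate-scale-translate pattern can be wrapped in a small helper so the anchor point is not hard-coded. This is only a sketch; the function name scaleAboutTopLeft is ours, not part of Qt:

#include <QTransform>
#include <QRectF>
#include <QPointF>

// Build a transform that maps oldRect onto newRect, scaling about oldRect's top-left corner.
QTransform scaleAboutTopLeft(const QRectF &oldRect, const QRectF &newRect)
{
    const QPointF anchor = oldRect.topLeft();
    QTransform t;
    t.translate(anchor.x(), anchor.y());              // move the origin to the anchor point
    t.scale(newRect.width() / oldRect.width(),
            newRect.height() / oldRect.height());     // scale about it
    t.translate(-anchor.x(), -anchor.y());            // move the origin back
    return t;
}

// Usage: scaleAboutTopLeft(selectionRect, newSelectionRect).mapRect(selectionRect)
// returns newSelectionRect.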

how to scale graphics properly?

Now I need to draw some polylines according to their coordinates. These are the coordinates of one polyline:
1.15109497070313E+02 2.73440704345703E+01
1.15115196228027E+02 2.73563938140869E+01
1.15112876892090E+02 2.73697128295898E+01
1.15108222961426E+02 2.73687496185303E+01
1.15081001281738E+02 2.73908023834229E+01
1.15078292846680E+02 2.73949108123779E+01
1.15073806762695E+02 2.74090080261230E+01
1.15063293457031E+02 2.74221019744873E+01
1.15059646606445E+02 2.74324569702148E+01
I've drawn these polylines and moved them to the center of the window:
QPainter painter(this);
QPainterPath path;
for (auto& arc : layer.getArcs()) {
    // each arc's points form one polyline
    QPolygonF polygon = QPolygonF(arc.pts_draw);
    path.addPolygon(polygon);
}
// move all polylines to the center of window
QPointF offset = rect().center() - path.boundingRect().center();
painter.translate(offset);
painter.drawPath(path);
However, what I got in the window was this:
I think it's caused by the coordinates. All the coordinates are very close to each other, so the drawing comes out far too small in the window. So my problem is: how do I scale the graphics properly? In other words, how can I work out the scaling ratio?
On a QGraphicsView you can call scale(qreal sx, qreal sy) to scale the QGraphicsScene and all its QGraphicsItems. If you wish to scale each item individually instead of the entire scene, take each point in the polygon and apply an ordinary Euclidean scaling to it. Or you could use a QTransform, as in the previous answer; a sketch for the QPainter-based code from the question follows below.
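Since the question draws with QPainter directly rather than through a QGraphicsView, here is a sketch of one common approach under that assumption: compute a uniform scale factor from the path's bounding rectangle and the widget size, then centre the scaled path. The 0.9 margin factor is our own choice, not something from the question:

// path is built exactly as in the question
QPainter painter(this);
const QRectF bounds = path.boundingRect();

// uniform scale factor that fits the path into the widget, with a 10% margin
const qreal s = 0.9 * qMin(width() / bounds.width(), height() / bounds.height());

painter.translate(rect().center());    // the widget centre becomes the new origin
painter.scale(s, s);                   // enlarge the tiny world coordinates
painter.translate(-bounds.center());   // map the path's centre onto the widget centre
painter.drawPath(path);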

OpenGL - get mouse position co-ordinates

I am making a 2D board game. The game board grid is 8x8 and each cell of the grid is an object, so a board consists of 64 cell objects. My aim is to work out which cell the mouse is in. I am attempting this by tracking the mouse coordinates and comparing them to the grid coordinates.
My coordinate system is as follows:
gluOrtho2D(-4,4,-4,4);
I am trying to get the current mouse position by using the following in my update function:
POINT p;
if (GetCursorPos(&p)) {
}
if (ScreenToClient(hWnd, &p)) {
    // p now holds client-area (window) coordinates
}
However, although this tracks the mouse coordinates, it does not give me the world coordinates that I set up with gluOrtho2D. How can I achieve this?
It depends on your glViewport.
Let's say you have:
glViewport(0,0, 640, 640);
The mouse position is (mousePos.x,mousePos.y) and the world position you want to know is (world.x, world.y)
And, given that the top-left corner of your screen is the (0, 0) coordinate,
Then we can make the following:
world.x = -4.0 + (mousePos.x / 640.0) * (4*2)
world.y = 4.0 - (mousePos.y / 640.0) * (4*2)
What we are doing here is a linear interpolation using the normalized position of the mouse within the screen (mousePos.x / 640) and then multiplying this value by the width of the world (4*2 = 8).
Given that the top-left corner of the grid starts at (-4, 4), we add the world-position offset.
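For the 8x8 board in the question, the same mapping can be taken one step further to get the cell under the cursor. This is a sketch under the question's assumptions (a 640x640 viewport, gluOrtho2D(-4, 4, -4, 4), and client-area mouse coordinates with (0, 0) at the top left); the Cell struct and the function name are ours:

#include <cmath>

struct Cell { int col; int row; };

Cell cellFromMouse(long mouseX, long mouseY)   // e.g. p.x, p.y after ScreenToClient
{
    // client coordinates -> world coordinates
    double worldX = -4.0 + (mouseX / 640.0) * 8.0;
    double worldY =  4.0 - (mouseY / 640.0) * 8.0;

    // each cell is 1x1 world unit; shift by +4 so indices run 0..7,
    // with row 0 at the top of the board
    int col = static_cast<int>(std::floor(worldX + 4.0));
    int row = static_cast<int>(std::floor(4.0 - worldY));
    return { col, row };
}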

How gluOrtho2D Works

glViewport(0, 0, w, h);
gluOrtho2D(10, 100, 10, 100);
I am not getting any output, can you help me?
If I set it to
gluOrtho2D(-50, 50,-50, 50);
Then the object is drawn at the center of the window.
gluOrtho2D(left, right, bottom, top) defines a view box. That means that there is no perspective.
With
gluOrtho2D(-50,50,-50,50);
you say (assuming you have no additional model view matrix):
The viewport's left edge is located at world position x = -50
The viewport's right edge is located at world position x = +50
The viewport's bottom edge is located at world position y = -50
The viewport's top edge is located at world position y = +50
If then your object appears at the screen's center, it is probably located at the world's origin.
If you specify
gluOrtho2D(10,100,10,100);
then you can see an x range from 10 to 100 and a y range from 10 to 100. The origin is not within this range and is therefore not visible (it lies beyond the left and bottom edges of the viewport).
Note the argument order: the signature is gluOrtho2D(left, right, bottom, top), so the third argument is the bottom edge of the view box and the fourth is the top edge.
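To make this concrete, here is a minimal sketch of a complete program (freeglut/GLUT is assumed for the window setup, which the question does not show). With gluOrtho2D(10, 100, 10, 100) only geometry inside that range is visible, so the triangle is placed around (55, 55) instead of at the origin:

#include <GL/glut.h>

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(10, 100, 10, 100);   // visible world: x in [10, 100], y in [10, 100]

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_TRIANGLES);          // a triangle centered roughly at (55, 55)
    glVertex2f(40.0f, 40.0f);
    glVertex2f(70.0f, 40.0f);
    glVertex2f(55.0f, 70.0f);
    glEnd();

    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 640);
    glutCreateWindow("gluOrtho2D demo");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}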