Transformation ignores sf::Sprite's origin - C++

Transforming a sprite in SFML does not respect its new origin.
In my case the sf::Sprite rotates around the axis at the top left corner ({0,0}) regardless of its origin. Setting a new origin with .setOrigin() beforehand has no effect.
I am sure that the sprite is given the right origin position earlier, which is the center of its rectangle.
My code:
In each of my Card class constructors I set the origin of my sprite.
card_sprite.setOrigin(Card::get_default_single_card_size().x*Game::get_scale()/2,Card::get_default_single_card_size().y*Game::get_scale()/2);
And then in my Deck class, which behaves like a std::stack of Cards, I use this function:
void Deck::push(const Card& crd)
{
    push_back(crd);
    // ...
    std::default_random_engine generator;
    std::uniform_real_distribution<float> distributor(0, 360);
    top().setRotation(distributor(generator));
}
Card::setRotation() looks like this (and still rotates the card around the top left corner):
void Card::setRotation(float angle)
{
    card_sprite.setRotation(angle);
}
Thanks for the help in advance.

Edit: Actually, most methods in sf::Transform accept extra arguments to specify a center for the transformation, as per https://stackoverflow.com/users/7703024/super 's comment on my question on the same theme: How to "set the origin" of a Transform in sfml.
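As an illustration of those overloads, here is a minimal sketch (assuming SFML 2.x; the pivot and the 45-degree angle are placeholder values, not taken from the question) of rotating around an explicit center with sf::Transform::rotate(angle, center) and passing the transform as a render state:
// Minimal sketch: rotate around an explicit center via sf::Transform,
// then pass the transform when drawing. 'center' is an assumed pivot.
sf::Transform t;
sf::Vector2f center = card_sprite.getPosition();  // assumed pivot point
t.rotate(45.f, center);                           // overload taking a rotation center
window.draw(card_sprite, t);                      // sf::RenderStates built from the transform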
I'm not too sure from your code, but I might've come up against a similar problem.
I "solved" it (in a very not ideal way) by replacing every call to a sfml drawing function with a call to a custom function when using sf::Transforms.
eg: instead of doing something like:
window.draw(thing, my_transform);
I had to do:
draw_transformed(thing, my_transform, window);
Where the code of draw_transformed looks like this:
void draw_transformed(sf::Shape const& thing, sf::Transform const& t, sf::RenderWindow& window) // cf. note (1)
{
    sf::Vector2f pos = thing.getPosition();
    sf::Transform go_to_zero;
    go_to_zero.translate(-pos);
    sf::Transform go_back;
    go_back.translate(pos);
    sf::Transform conjugated_transform = go_back * t * go_to_zero;
    window.draw(thing, conjugated_transform);
}
(1) We can't use sf::Drawable as the type of thing because in SFML not all drawable things have a getPosition method, so we have to overload the function or do something "complicated" to go beyond this example.
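One hedged way past note (1), sketched here purely as an illustration: make draw_transformed a template, so any drawable type that exposes getPosition() (sprites, text, shapes) can be passed without writing separate overloads.
template <typename T>
void draw_transformed(T const& thing, sf::Transform const& t, sf::RenderWindow& window)
{
    // Same conjugation trick as above, but accepting any drawable type
    // that provides getPosition().
    sf::Vector2f pos = thing.getPosition();
    sf::Transform go_to_zero;
    go_to_zero.translate(-pos);
    sf::Transform go_back;
    go_back.translate(pos);
    window.draw(thing, go_back * t * go_to_zero);
}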

Related

SFML: How to check if a point is contained in a group of transformed drawables

Context
I have a class representing a text box. The text box contains a header, some text, and a rectangle to enclose the box. It only displays itself (for now):
struct Textbox : public sf::Drawable, public sf::Transformable {
    sf::Text header;
    sf::Text text;
    sf::RectangleShape border;

    Textbox() {
        // set relative locations of the members
        header.setPosition(0, 0);
        auto header_bounds = header.getGlobalBounds();
        // the text should be just below the header
        text.setPosition(0, header_bounds.top + header_bounds.height);
        auto source_bounds = text.getGlobalBounds();
        // this function just returns a rectangle enclosing two rectangles
        sf::FloatRect rect = enclosing_rect(header_bounds, source_bounds);
        // this function sets the position, width and height of border to be equal to rect's
        setRectParams(border, rect);
    }

    void draw(sf::RenderTarget& target, sf::RenderStates states) const override {
        states.transform *= getTransform();
        target.draw(header, states);
        target.draw(text, states);
        target.draw(border, states);
    }
};
The Problem
What I want
I want to add a contains method. It should return true if coor is inside the border of the box. Here is my naive implementation:
bool Textbox::contains(sf::Vector2i coor) const {
    return border.getGlobalBounds().contains(coor.x, coor.y);
}
Why this implementation doesn't work
This implementation breaks when I move the Textbox via the Transformable non-virtual functions. The Textbox moves and it also draws its members as transformed. But it does not actually transform them! It only displays them as transformed. So the border doesn't even know it has been moved.
Possible solutions
I can add all the functions of the Transformable API to this class, thus shadowing them and calling transform myself on each of the members. I don't like this because it makes me write so much more code than I wanted. It also raises the question of how to handle the double transforms (the one for the Textbox and the ones for its members).
I can write a completely different class Group that holds a vector of drawables and transformables and has all that shadowing API mechanism. All that is left is to inherit from it. This doesn't actually sound that bad.
I have heard about Entity Component System - it just sounds pretty overkill.
I can apply the transform to the members when contains is called. But the function is const - it's a query. Also, it's bad design to update your data on seemingly random calls.
The same as before, except that the transform applies to a function-local copy of the rectangle. This smells too - why would I call the transform functions on the whole Textbox just so it would apply them on every method call (so far just its draw and contains, but down the line who knows)?
Make the members mutable and somehow transform them inside the draw method. This smells hackish.
The question
How do I group transformations onto multiple entities via an ergonomic API?
The only method that you really need to 'change' (or, to be fair, add on your own) is getGlobalBounds().
When you inherit from sf::Transformable and sf::Drawable, you should treat the class itself (your Textbox struct) as a shape; therefore you just need to call myTextbox.getGlobalBounds().contains(x, y), where myTextbox is a Textbox.
Using your own code:
struct Textbox : public sf::Drawable, public sf::Transformable {
    sf::Text header;
    sf::Text text;
    sf::RectangleShape border;

    sf::FloatRect getGlobalBounds() const {
        auto header_bounds = header.getGlobalBounds();
        auto source_bounds = text.getGlobalBounds();
        sf::FloatRect rect = enclosing_rect(header_bounds, source_bounds);
        // Don't really know what it does, but let's say it returns Top and Left as 0 and calculates Width and Height.
        return sf::FloatRect(getPosition(), sf::Vector2f(rect.width, rect.height));
    }
};
But you still have to manage rotation, resizing, etc. when calculating the global bounds.
EDIT:
One way to implement rotation and scaling:
sf::FloatRect getGlobalBounds() const {
    auto header_bounds = header.getGlobalBounds();
    auto source_bounds = text.getGlobalBounds();
    sf::FloatRect rect = enclosing_rect(header_bounds, source_bounds);
    // Don't really know what it does, but let's say it returns Top and Left as 0 and calculates Width and Height.
    sf::RectangleShape textbox(sf::Vector2f(rect.width, rect.height));
    // at this point textbox = global bounds of Textbox without transformations
    textbox.setOrigin(getOrigin());      // set origin (point of transformation) before transforming
    textbox.setScale(getScale());
    textbox.setRotation(getRotation());
    textbox.setPosition(getPosition());
    // after transformation, get the bounds
    return textbox.getGlobalBounds();
}
The solution might be much simpler than you expect. Instead of applying all the transforms to the transformable children/members, just de-transform the point you want to check (take it to local space).
Try this:
bool Textbox::contains(sf::Vector2i coor) const {
    // Get the point in the local space of the rectangle
    sf::Transform inverseTr = this->getInverseTransform();
    sf::Vector2f pointAsLocal = inverseTr.transformPoint(coor.x, coor.y);
    // Check if the point, now in local space, is contained in the rectangle
    return border.getLocalBounds().contains(pointAsLocal);
    //            ^
    //            Important! Use local bounds here, not global
}
Why does this work?
Math!
When you work with transformation matrices, you can think of them as portals between spaces. You have a local space where no transformations have been applied, and you have a final space where all transformations are applied.
The problem with the global bounds of a transformable member is that they belong neither to the local space nor to the final space. They are just a rectangle bounding the shape in a possibly intermediate space, and those bounds don't even take rotation into account.
What we are doing here is taking the coordinates, which exist in the final space, and bringing them to the local space of the rectangle, thanks to the inverse transformation matrix. So it doesn't matter how many translations, rotations or scales (or even skews, if you have customized the matrix) you apply to the rectangle: the inverse matrix takes the point to a space where you can simply check whether it belongs, as if no transformation had ever been applied.
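A small usage sketch of the contains() shown above (the Textbox type is the one from this question; handleClick is just an illustrative wrapper assumed to be called from the event loop, with the default view, so no pixel-to-world mapping is done):
void handleClick(const sf::Event& event, const Textbox& myTextbox)
{
    if (event.type == sf::Event::MouseButtonPressed) {
        sf::Vector2i click(event.mouseButton.x, event.mouseButton.y);
        if (myTextbox.contains(click)) {
            // the click landed inside the (possibly rotated and scaled) border
        }
    }
}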

Why is there no QTransform::fromRotate?

Qt's QTransform offers some more optimized ways to construct a translated/scaled QTransform matrix using these static methods:
QTransform::fromScale
QTransform::fromTranslate
Now I need a rotated transform and I thought it would be nice to also have a QTransform::fromRotate. But this one does not exist.
In my case I am modifying an existing transform according to mouse interaction like panning, zooming and also rotating.
void MapDrawingItem::wheelEvent(QWheelEvent* event)
{
    // Move the hovered point to the top left point on screen
    m_view_transform *= QTransform::fromTranslate(-event->posF().x(), -event->posF().y());

    // Apply transformations accordingly
    if ((event->modifiers() & Qt::ControlModifier) == Qt::ControlModifier)
        m_view_transform *= QTransform().rotate(event->delta() / 30.);
    else
    {
        auto factor = qPow(1.001, event->delta());
        m_view_transform *= QTransform::fromScale(factor, factor);
    }

    // Move the hovered point back to the mouse cursor
    m_view_transform *= QTransform::fromTranslate(event->posF().x(), event->posF().y());

    emit signalViewTransformChanged(m_view_transform);
    update();
}
The code works correctly, but I would like to replace the QTransform().rotate(...) with a QTransform::fromRotate(...).
Why does this method not exist already? I just can't imagine the Qt developers forgot about this one. Is there anything that makes it impossible?
The most probable reason is that a static function creating a rotation transformation would force you to always link against the math library (for the sin(3) and cos(3) functions), so instead you can compute those values yourself and use the specific constructor for that.
By the way, you can use one of the constructors to specify the constants to use for the elements of the matrix. It's quite common to keep matrices around for reuse when working with transformations.
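As an illustration of that last point, here is a minimal sketch of a hand-rolled helper (the name fromRotate is my own, not Qt API) built on the element-wise QTransform constructor, mirroring what QTransform().rotate(degrees) produces for a Z-axis rotation:
#include <QTransform>
#include <QtMath>

// Hypothetical helper: build a pure rotation matrix directly from the
// element-wise constructor QTransform(m11, m12, m21, m22, dx, dy).
QTransform fromRotate(qreal degrees)
{
    const qreal rad = qDegreesToRadians(degrees);
    const qreal s = qSin(rad);
    const qreal c = qCos(rad);
    return QTransform(c, s, -s, c, 0.0, 0.0);
}

// Usage in the wheel handler above would then read (sketch):
// m_view_transform *= fromRotate(event->delta() / 30.);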

Drawing "higher-level" object

So I'm using SFML for a Computer Science project - making a chess game. I have a class Square which is a single square of the chessboard - currently, it contains four vertices (four sf::Vertex objects in a member variable sf::VertexArray) and is colored either white or black. A class ChessBoard encapsulates a std::vector of Squares.
Using the tutorial given by SFML, I'm able to draw a single square. However, the draw() function works based on vertices, and since the ChessBoard class doesn't actually contain vertices, but rather objects that themselves contain vertices, I'm not able to draw the chess board (i.e. its internal draw() function does not work).
Does anyone know how to work around this?
(I can provide more info/clarification/code if necessary/helpful.)
That's not really how "higher level drawing" is supposed to work.
Your parent class(es) shouldn't have to bother with how to draw their children. You're mixing responsibilities.
Instead, subclass sf::Drawable (and sf::Transformable, if required).
All this does is force you to implement a draw() member, which does all the drawing.
Here's a simple example for your ChessBoard class:
class ChessBoard : public sf::Drawable {
    std::vector<Square> mTiles; // the board squares

    void draw(sf::RenderTarget& target, sf::RenderStates states) const override {
        for (auto& tile : mTiles)      // Iterate over all board pieces
            target.draw(tile, states); // Draw them
    }
};
As you can see, this is trivial to set up. In a similar way, you can adapt your Square class. (Isn't that name too generic? Why not simply reuse sf::RectangleShape?)
class Square : public sf::Drawable {
    sf::VertexArray mVertices; // the four corner vertices

    void draw(sf::RenderTarget& target, sf::RenderStates states) const override {
        target.draw(mVertices, states);
    }
};
So, back to your main game loop. How to draw the ChessBoard? Again, trivial:
while (window.isOpen()) {
    // All the other things happening
    window.draw(mChessBoard);
}
While the advantages of this approach might not be as obvious at first, it's pretty easy to see that you're able to pass responsibilities down the line. For example, the ChessBoard doesn't have to know how to properly draw a Square. In a trivial example – using unicolored polygons only – it's not that easy to notice, but your code will be a lot cleaner once you start adding shaders, textures, etc. Suddenly you'd no longer just have to return an sf::VertexArray, but you'd also need pointers or references to the other resources. So the ChessBoard would have to know which components to request from Square to draw it properly (does it have a shader? do I need a texture?).
Never mind. Silly me. I implemented a getter inside class Square that returned the vertex array, and inside ChessBoard looped through the vector of squares, calling the getter on each iteration.
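For completeness, a hedged sketch of what that getter approach could look like (the member names mVertices and mSquares are assumptions based on the description above, not the asker's actual code):
#include <SFML/Graphics.hpp>
#include <vector>

class Square {
public:
    const sf::VertexArray& getVertices() const { return mVertices; }
private:
    sf::VertexArray mVertices{sf::Quads, 4}; // the four corner vertices
};

class ChessBoard : public sf::Drawable {
    std::vector<Square> mSquares;

    void draw(sf::RenderTarget& target, sf::RenderStates states) const override {
        // loop over the squares and draw each one's vertex array
        for (const auto& square : mSquares)
            target.draw(square.getVertices(), states);
    }
};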

Can't animate QTransform in qgraphicsItem

I'm trying to do some 3D animation in a QGraphicsScene, for example rotating pictures in the scene (using a class subclassed from QGraphicsPixmapItem and QObject, if it matters) with the Animation Framework.
Everything works fine until I want to rotate pictures around the vertical axis.
There is no way to do so via item.rotate(), so I'm using QTransform.
The problem is that doing so does not animate anything at all. What am I doing wrong?
P.S. I do not want to use OpenGL for this.
Here is the way I'm doing it. This approach works for animating simpler properties like pos and rotation (via rotation/setRotation).
My code:
// hybrid graphics scene item, supporting animation
class ScenePixmap : public QObject, public QGraphicsPixmapItem
{
    Q_OBJECT
    Q_PROPERTY(QTransform transform READ transform WRITE setTransform)
public:
    ScenePixmap(const QPixmap& pixmap, QObject* parent = NULL, QGraphicsItem* parentItem = NULL) :
        QObject(parent),
        QGraphicsPixmapItem(pixmap, parentItem)
    {}
};
Here is how I set up the scene and animation:
// setup scene
// Unrelated stuff, loading pictures, etc.
scene = new QGraphicsScene(this);
int i = 0; // position counter (declared here so the snippet is complete)
foreach (const QPixmap& image, images)
{
    ScenePixmap* item = new ScenePixmap(image);
    item->moveBy(70 * i, 0);
    i++;
    this->images.append(item);
    scene->addItem(item);
}
ui->graphicsView->setBackgroundBrush(QBrush(Qt::black, Qt::SolidPattern));
ui->graphicsView->setScene(scene);
// setup animation
QTransform getTransform()
{
    QTransform transform;
    transform.rotate(-30, Qt::ZAxis); // also tried transform = transform.rotate(...)
    return transform;
}
QAbstractAnimation* SetupRotationAnimation(ScenePixmap* pixmapItem)
{
    QPropertyAnimation* animation = new QPropertyAnimation(pixmapItem, "transform");
    animation->setDuration(1400);
    animation->setStartValue(pixmapItem->transform());
    animation->setEndValue(getTransform()); // here I tried to multiply with the default transform; that does not work either
    return animation;
}
Here is the way I start the animation:
void MainWindow::keyPressEvent(QKeyEvent* event)
{
    if ((event->modifiers() & Qt::ControlModifier))
    {
        QAnimationGroup* groupAnimation = new QParallelAnimationGroup();
        foreach (ScenePixmap* image, images)
        {
            groupAnimation->addAnimation(SetupRotationAnimation(image));
        }
        groupAnimation->start(QAbstractAnimation::DeleteWhenStopped);
    }
}
EDIT [Solved], thanks to Darko Maksimovic:
Here is the code that worked out for me:
QGraphicsRotation* getGraphicRotation()
{
    QGraphicsRotation* transform = new QGraphicsRotation(this);
    transform->setAxis(Qt::YAxis);
    return transform;
}

QAbstractAnimation* SetupRotationAnimation(ScenePixmap* pixmapItem)
{
    QGraphicsRotation* rotation = getGraphicRotation();
    QPropertyAnimation* animation = new QPropertyAnimation(rotation, "angle");
    animation->setDuration(1400);
    animation->setStartValue(0);
    animation->setEndValue(45);
    pixmapItem->setTransformOriginPoint(pixmapItem->boundingRect().center());
    QList<QGraphicsTransform*> transformations = pixmapItem->transformations();
    transformations.append(rotation);
    pixmapItem->setTransformations(transformations);
    return animation;
}
I see you use QTransform. If you want only one rotation, and a simple rotation at that, it is better to use setRotation (don't forget about setTransformOriginPoint).
If, however, you want to keep many rotations, around different transform points for example, then you should use QGraphicsTransform, i.e. its specialised derived class QGraphicsRotation, which you apply by calling setTransformations on a graphics object (you should first fetch the existing transformations by calling transformations(), append to that list, then call setTransformations; if you keep the pointer to the added transformation you can also change it later directly).
From the code you posted I can't see where your error is coming from, but by using the specialised functions you can avoid some frequent problems.
P.S. I also see you didn't use prepareGeometryChange in the code you posted, so please be advised that this is necessary when transforming objects.
The problem is very simple. QVariantAnimation and QPropertyAnimation don't support QTransform. Heck, they don't even support unsigned integers at the moment. That's all there's to it. Per Qt documentation:
If you need to interpolate other variant types, including custom types, you have to implement interpolation for these yourself. To do this, you can register an interpolator function for a given type. This function takes 3 parameters: the start value, the end value and the current progress.
It might not be all that trivial to write such an interpolator. Remember that you have to "blend" between two matrices while maintaining the orthonormality of the matrix, and the visual effect. You'd need to decompose the matrix into separate rotation-along-axis, scaling, skew, etc. sub-matrices, and then interpolate each of them separately. No wonder the Qt folks didn't do it. It's probably much easier to solve the inverse problem: generate the needed transformation matrix as a function of some parameter t, by composing the necessary rotations, translations, etc., all given as (perhaps constant) functions of t.
Just because a QVariant can carry a type doesn't mean it's supported by the default interpolator. There probably should be a runtime warning issued to this effect.
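Purely to illustrate the registration mechanism the documentation describes (not a recommendation: the naive element-wise blend below does not preserve orthonormality, which is exactly the issue discussed above):
#include <QVariantAnimation>
#include <QTransform>

// Naive element-wise blend between two transforms; only meant to show the
// shape of an interpolator function, not a correct rotation interpolation.
static QVariant transformInterpolator(const QTransform &from, const QTransform &to, qreal t)
{
    auto lerp = [t](qreal a, qreal b) { return a + (b - a) * t; };
    QTransform blended(lerp(from.m11(), to.m11()), lerp(from.m12(), to.m12()),
                       lerp(from.m21(), to.m21()), lerp(from.m22(), to.m22()),
                       lerp(from.m31(), to.m31()), lerp(from.m32(), to.m32()));
    return QVariant::fromValue(blended);
}

// Registered once, e.g. early in main():
// qRegisterAnimationInterpolator<QTransform>(transformInterpolator);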

Move Chipmunk Body to Sprite position

I have a Chipmunk shape, with a body, in a space. I am removing the body from the space so that I can position it and not have it fall due to gravity etc. I need to be able to make this body move, so I am not making it static.
I need the body to update its position according to the position of a Cocos2D sprite in the scene + an offset.
I'm setting the body's position with:
collShape->body->p = collSprite.position; - this seems not to work; there are no compile errors and it runs, but the collision shape doesn't move.
Is it possible to move a collision body based upon the position of a sprite in my tick method?
What you're doing should be possible.
The cleanest way is to create a new class that derives from CCSprite and then override the setPosition method to update the sprite's body.
The advantage of this is that anytime the sprite's position is changed (either explicitly by you or by any animation sequence), the Chipmunk body will automatically get updated.
-(void) setPosition:(CGPoint)p
{
    [super setPosition:p];
    if (self->body != nil) {
        self->body->p.x = p.x;
        self->body->p.y = p.y;
        // Note: also call cpSpaceRehash to let Chipmunk know about the new position
    }
}
When you call cpSpaceStep, a list of active shapes is created and cpShapeUpdateFunc is called for each. That function looks like:
void
cpShapeUpdateFunc(cpShape *shape, void *unused)
{
    cpBody *body = shape->body;
    cpShapeUpdate(shape, body->p, body->rot);
}
...which updates the shape to the location and rotation of the body it's attached to. If that's not happening, maybe your shape has not been added to the space, or has not been added to the body?
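For what it's worth, a hedged sketch of the tick-side update described in the question (assuming the Chipmunk 6.x C API; the function and variable names are placeholders): move the rogue body to the sprite position plus an offset, then reindex its shapes so collisions see the new location.
#include "chipmunk.h"

// Call this from the tick/update method after the sprite has moved.
static void syncBodyToSprite(cpSpace *space, cpBody *body, cpVect spritePos, cpVect offset)
{
    cpBodySetPos(body, cpvadd(spritePos, offset));  // reposition the body
    cpSpaceReindexShapesForBody(space, body);       // update the spatial index for its shapes
}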