SFML Views setCenter vs rotation - c++

I have a view with the same dimensions as the original window (500x300).
I apply view.zoom(2), so the contents are shown at half size.
Now the view is centered. I want to move the view to the upper-left corner of the original window, so I call view.setCenter(500,300);
The view is now correctly positioned in the upper-left corner of the original window. But now I want to rotate the view around its own top-left corner, i.e. (0,0): view.setRotation(5);
As you can see, the axis of rotation should be at (0,0), but that is not respected.
The problem is that if I call view.setCenter(0,0), the whole view snaps back to the middle of the original window.
How can I solve this?

Instead of using view.setCenter(500,300);, move the view with view.move(x_offset, y_offset);. That way you are not redefining the view's center, so it won't get reset later on.
I recommend consulting the API reference of View for further reading.
You might also be interested in void sf::View::setViewport(const FloatRect& viewport) or void sf::View::reset(const FloatRect& rectangle).
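For reference, a rough sketch of how those calls fit together (SFML 2.x). The window name and the 500x300 / zoom(2) values come from the question; the viewport rectangle is purely illustrative:
// assumes an sf::RenderWindow named `window`, as in the question
sf::View view;
view.reset(sf::FloatRect(0.f, 0.f, 500.f, 300.f));     // define the visible world rectangle directly
view.zoom(2.f);                                        // show twice the area, as in the question
view.move(250.f, 150.f);                               // same effect as setCenter(500, 300), expressed as an offset
view.setViewport(sf::FloatRect(0.f, 0.f, 0.5f, 0.5f)); // optional: draw into the top-left quarter of the window
window.setView(view);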

This code, kindly provided by Geheim, solves the problem and also teaches a more practical approach to SFML.
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window({500, 300}, "SFML Views", sf::Style::Close);
    window.setFramerateLimit(120);

    bool useViewPort {true}; // change this to false to see the other version

    sf::View camera {{0, 0}, static_cast<sf::Vector2f>(window.getSize())};
    if (useViewPort)
    {
        camera.setViewport({-0.5f, -0.5f, 1, 1});
        camera.rotate(5);
    }
    else
        camera.setCenter(camera.getSize());
    camera.zoom(2);
    window.setView(camera);

    sf::RectangleShape background {camera.getSize()};
    sf::RectangleShape square {{50, 50}};
    square.setFillColor(sf::Color::Red);

    sf::RenderTexture texture;
    texture.create(window.getSize().x, window.getSize().y);

    while (window.isOpen())
    {
        for (sf::Event event; window.pollEvent(event);)
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        if (useViewPort)
        {
            window.draw(background);
            window.draw(square);
        }
        else
        {
            texture.clear();
            texture.draw(background);
            texture.draw(square);
            texture.display();
            sf::Sprite content {texture.getTexture()};
            content.rotate(-5); // you have to rotate in the other direction here, do you know why?
            window.draw(content);
        }
        window.display();
    }
    return EXIT_SUCCESS;
}

I'm glad you got the result you wanted by applying viewports using Geheim's code.
However, if you don't want to be using viewports to clip areas of the window and such, you can still rotate a view around a specific point other than its centre. You just need a little bit of mathematics...
Take the difference between the target point (in the view's co-ordinate system) and the view's centre, and rotate that offset around the view's centre by the amount you wish to rotate the view. Then, calculate the difference between those two points (the target point and the rotated point). Once you have this difference, simply rotate the view (around its centre as normal) and then move the view by that difference.
It might sound complicated so you might want to just use this free function that I made that does it all automatically; it's on the SFML Wiki:
RotateViewAt
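If you prefer to keep the logic inline, here is a minimal sketch of that procedure (my own paraphrase for SFML 2.x; the Wiki function may differ in its details):
#include <SFML/Graphics.hpp>
#include <cmath>

// Rotate `view` by `rotation` degrees so that the world point `coord` stays fixed on screen.
void rotateViewAt(sf::Vector2f coord, sf::View& view, float rotation)
{
    const sf::Vector2f offset{ coord - view.getCenter() };
    const float rad{ rotation * 3.14159265f / 180.f };
    const sf::Vector2f rotatedOffset{ std::cos(rad) * offset.x - std::sin(rad) * offset.y,
                                      std::sin(rad) * offset.x + std::cos(rad) * offset.y };
    view.rotate(rotation);             // rotate around the centre as usual...
    view.move(offset - rotatedOffset); // ...then shift so that `coord` ends up where it started
}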

Related

Why is the screen space coordinate system for my sfml rendered app inverted?

I am learning C++ and I thought I'd make the original asteroids game with a fresh coat of paint using the SFML graphics library. However, for my player sprite, while the origin is at the top left corner of the screen, to the right of it is the negative x axis and downwards is the negative y axis (the opposite of what it's supposed to be in both cases). Also, no matter the object or rotation, invoking the setRotation function always rotates the object about the top left corner of the screen, even if I have set that object's origin to its center.
#include<SFML\Graphics.hpp>
using namespace sf;

const int W{ 1200 }, H{ 800 };
const float degToRad = 0.017453f;

int main() {
    float x{ -600 }, y{ -400 };
    float dx{}, dy{}, angle{};
    bool thrust;

    RenderWindow app(VideoMode(W, H), "Asteroids!");
    app.setFramerateLimit(60);

    Texture t1, t2;
    t1.loadFromFile("images/spaceship.png");
    t2.loadFromFile("images/background.jpg");

    Sprite sPlayer(t1), sBackground(t2);
    sPlayer.setTextureRect(IntRect(40, 0, 40, 40));
    sPlayer.setOrigin(-600, -400);

    while (app.isOpen())
    {
        app.clear();
        app.draw(sPlayer);
        app.display();
    }
    return 0;
}
The above code draws the player (spaceship.png) to the center of my rendered window (app) but notice how I have had to put in negative coordinates. Also, if I further put in the code for taking keyboard inputs and call the setRotation function, instead of rotating my sPlayer sprite about its center (i.e. (-600,-400)), it rotates the sprite about the top left corner of the screen which is (0,0). I can't find any explanation for this in the SFML online documentation. What should I do?
As I mentioned I have tried reading the documentation. I've watched online tutorials but to no avail.
Origin is the point on the sprite where you "hold" it.
Position is the point on the screen where you put the Origin of the sprite.
In short, you take your sprite by its Origin and place it so that the Origin sits at the Position.
By default, both Origin and Position are (0, 0), so the top left of your sprite is put at the top left of the screen. What you did was to say "take this point on the sprite, which is way to the upper-left of the actual visible part of the sprite, and put it at the top left of the screen". This had the effect of moving your sprite to the bottom right.
You probably want something like:
// This will make sure that the Origin, i.e. the point that acts as the center of rotation and other transformations, is at the center of the ship
sPlayer.setOrigin(sprite_width / 2, sprite_height / 2);
// This will put Origin (center of the ship) at center of the screen
sPlayer.setPosition(screen_width / 2, screen_height / 2);
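Putting that together with the question's code, a minimal sketch (the W and H window constants and the sPlayer sprite come from the question):
sf::FloatRect bounds = sPlayer.getLocalBounds();
sPlayer.setOrigin(bounds.width / 2.f, bounds.height / 2.f); // origin at the sprite's own center
sPlayer.setPosition(W / 2.f, H / 2.f);                      // place that center in the middle of the 1200x800 window
sPlayer.setRotation(45.f);                                  // now rotates about the sprite's center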

Set object position relative to other object

So I am trying to learn game development, and I want my character to be able to move its elbow. The elbow looks like this: it consists of two sprites, arm1 and arm2. Arm1 can rotate around its origin, and arm2 should sit at the tip of arm1 (about 60 px from arm1's origin). But I don't know how to put arm2 at the correct position, like in the image. I tried to use polar coordinates, because I know the angle and the arm length:
lines[0].position=Vector2f(arm1.getPosition().x,arm1.getPosition().y);
lines[0].color=Color::Blue;
armPos.x=arm1.getPosition().x+(d*cos(AngleToRad(arm1.getRotation()-toleransi) ));
armPos.y=arm1.getPosition().y+(d*sin(AngleToRad(arm1.getRotation()-toleransi)));
lines[1].position=armPos;
lines[1].color=Color::Blue;
cir.setPosition(armPos);
arm1.setPosition(mc.getPosition().x+10,mc.getPosition().y-50);
arm2.setPosition(arm1.getPosition().x,mc.getPosition().y-10);
But that doesn't work. I use the circle and the line just for debugging.
The full code looks like this:
#include <SFML/Graphics.hpp>
#include <math.h>
#include <iostream>
#include <vector>
#include "Player.h"
#include "Particle.h"
using namespace sf;

float AngleToRad(float a)
{
    return (a/360.0f)*3.14159265359;
}

int main()
{
    RenderWindow window(VideoMode(1000,640), "Small Life");

    //////////////Setup////////////
    //mc//
    Texture idle_texture;
    idle_texture.loadFromFile("image/idle.png");
    IntRect player_rect(264,0,264,264);
    Sprite mc(idle_texture,player_rect);
    mc.setOrigin(132,264);
    Player player(&idle_texture, Vector2u(4,1),0.3f);
    mc.setPosition(0,300);
    mc.setScale(0.7,0.7);

    //arm//
    Texture arm1_texture;
    arm1_texture.loadFromFile("image/arm1.png");
    Sprite arm1(arm1_texture);
    Texture arm2_texture;
    arm2_texture.loadFromFile("image/arm2.png");
    Sprite arm2(arm2_texture);
    arm1.setOrigin(70,158);
    arm2.setOrigin(79,158);
    arm1.setScale(0.5,0.5);
    arm2.setScale(0.7,0.7);

    //blood//
    Texture blood_texture;
    blood_texture.loadFromFile("image/blood.png");
    CircleShape cir(10);
    cir.setOrigin(5,5);
    VertexArray lines(LinesStrip,2);
    cir.setFillColor(Color::Red);

    float deltaTime=0.0f;
    Clock clock;
    Clock particle_time;
    float speed=0.2f;
    std::vector<Sprite> bloodVec;

    std::cout<<sin(1.5708)<<" "<<cos(AngleToRad(180))<<" "<< AngleToRad(180)<<" "<<" "<<asin(1)<<" "<<acos(1)<<std::endl;

    while (window.isOpen())
    {
        Event event;
        deltaTime=clock.restart().asSeconds();
        while (window.pollEvent(event))
        {
            if (event.type == Event::Closed)
                window.close();
        }

        if(Keyboard::isKeyPressed(Keyboard::W)) mc.move(0,-speed);
        if(Keyboard::isKeyPressed(Keyboard::S)) mc.move(0,speed);
        if(Keyboard::isKeyPressed(Keyboard::A)) mc.move(-speed,0);
        if(Keyboard::isKeyPressed(Keyboard::D)) mc.move(speed,0);

        //blood particle//
        if(particle_time.getElapsedTime().asSeconds()>1.5f)
        {
            IntRect blRect(0,0,200,200);
            Sprite b_blood(blood_texture,blRect);
            ParticleConstDrop(b_blood,mc.getPosition());
            bloodVec.push_back(b_blood);
            particle_time.restart();
        }
        int bloodCount=bloodVec.size();
        for(int i=0;i<bloodCount;i++)
        {
            window.draw(bloodVec[i]);
        }

        Vector2f armPos(arm1.getPosition());
        float d=30.0f;
        float toleransi=90;
        lines[0].position=Vector2f(arm1.getPosition().x,arm1.getPosition().y);
        lines[0].color=Color::Blue;
        armPos.x=arm1.getPosition().x+(d*cos(AngleToRad(arm1.getRotation()-toleransi)));
        armPos.y=arm1.getPosition().y+(d*sin(AngleToRad(arm1.getRotation()-toleransi)));
        lines[1].position=armPos;
        lines[1].color=Color::Blue;
        cir.setPosition(armPos);
        arm1.setPosition(mc.getPosition().x+10,mc.getPosition().y-50);
        arm2.setPosition(arm1.getPosition().x,mc.getPosition().y-10);
        arm1.setRotation(110); //arm1.getRotation()+0.1

        player.Update(0,deltaTime);
        mc.setTextureRect(player.plRect);

        window.draw(arm1);
        //window.draw(arm2);
        window.draw(mc);
        window.draw(cir);
        window.draw(lines);
        window.display();
        window.clear(Color(255,255,255));
    }
    return 0;
}
Can anyone please tell me what is wrong with my code, or is there another way to implement this?
Relative positions are achieved by transform composition (matrix multiplication). You could do it manually, but SFML already implements it, and even better: it uses it under the hood when drawing an sf::Sprite.
So let's see: arm2 must have a position relative to arm1, so how do we do that?
Set the origin of arm2 where the elbow joint is in arm2 local coordinates.
Set the position of arm2 where the elbow joint is in arm1 local coordinates.
Pass the arm1 transform to the sf::RenderStates each time you draw arm2. The transform multiplication will be performed underneath.
// Do this once
arm2.setOrigin(elbow_x_in_arm2, elbow_y_in_arm2);
arm2.setPosition(elbow_x_in_arm1, elbow_y_in_arm1);
// But this, each time you draw them
window.draw(arm1);
window.draw(arm2, sf::RenderStates(arm1.getTransform()));
Result:
Whenever you move, rotate or scale arm1, arm2 will remain attached. Also, if you rotate arm2, it will rotate around the elbow.
Important!
The transform of arm2 only represents its local transformation, so even though it is drawn in the correct position, the data does not contain the global position/rotation/scale. If you wanted to, for example, check for collisions on arm2, you should take this into account:
// don't use this to get the bounding box
sf::FloatRect boundingBoxBad = arm2.getGlobalBounds(); // WRONG: now they are not global
// use this:
sf::Transform tr1 = arm1.getTransform();
sf::Transform tr2 = arm2.getTransform();
sf::FloatRect boundingBoxGood = tr1.transformRect(tr2.transformRect(arm2.getLocalBounds()));

How to know a sprite position inside a view, relative to window?

I have this sprite of a car that moves with varied speed.
It is inside a view and the view is moved to the left to keep the car always in the center of the window.
The view follows the displacement of the car, i.e. it is shifted to the left as the car accelerates or brakes.
This way the car will always appear in the center.
But if for example it is overtaken by another car, it will be left behind.
For it not to disappear from the window, I have to zoom the view out so that all the cars appear.
But for this, I need to know the position of the car in relation to the window (not in relation to the view).
getGlobalBounds().left or getPosition().x show the same value, which is the position relative to the view, not relative to the window, as shown in the image.
How to know a sprite position inside a view, relative to window?
After several hours of research, I finally found the easy way to achieve this. And yes, it was ridiculously easy.
But first, I would like to clear up some misconceptions.
getGlobalBounds().left or getPosition().x show the same value, which is the position relative to the view, not relative to the window, as shown in the image.
In fact, those methods return the position in the world, not in the view nor in the window.
You can have, for instance, a 500x500 window, with a 400x400 view, in a 10000x10000 world. You can place things in the world, outside of the view or the window. When the world is rendered, then the transformations of the view (translations, rotations, zoom, ...) are applied to the world and things are finally shown in the window.
To know where a world coordinate is represented in the window (or any other RenderTarget) and vice versa, SFML actually has a couple of functions:
RenderTarget.mapCoordsToPixel(Vector2f point)
Given a point in the world, this gives you the corresponding pixel in the RenderTarget.
RenderTarget.mapPixelToCoords(Vector2i point)
Given a pixel in the RenderTarget, this gives you the corresponding point in the world. (This is useful for mapping mouse clicks to the corresponding points in your world.)
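For completeness, a small sketch of that inverse mapping (assuming an sf::RenderWindow named window and the camera view used in the code below):
// turn a pixel position (e.g. a mouse click) into world coordinates under a given view
sf::Vector2i pixel = sf::Mouse::getPosition(window);
sf::Vector2f world = window.mapPixelToCoords(pixel, camera);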
Result
Code
#include <SFML/Graphics.hpp>
#include <cstdlib>
#include <string>
using namespace sf;

int main()
{
    RenderWindow window({ 500, 500 }, "SFML Views", Style::Close);
    sf::View camera(sf::FloatRect(0, 0, window.getSize().x, window.getSize().y));
    sf::Vector2f orig(window.getSize().x / 2, window.getSize().y / 2);
    camera.setCenter(orig);

    sf::Font f;
    f.loadFromFile("C:/Windows/Fonts/Arial.ttf");
    sf::Text t;
    t.setFont(f);

    sf::RectangleShape r;
    r.setPosition(10, 10);
    r.setSize(sf::Vector2f(20, 20));
    r.setOutlineColor(sf::Color::Blue);
    r.setFillColor(sf::Color::Blue);
    t.setPosition(10, 40);

    while (window.isOpen())
    {
        for (Event event; window.pollEvent(event);)
            if (event.type == Event::Closed)
                window.close();
            else if (event.type == Event::KeyPressed)
            {
                camera.move(-3, 0);
                camera.rotate(5.0);
                camera.zoom(1.1);
            }

        auto realPos = window.mapCoordsToPixel(r.getPosition());
        std::string str = "Pos: (" + std::to_string(realPos.x) + "," + std::to_string(realPos.y) + ")";
        t.setString(str);

        window.clear();
        window.setView(camera);
        window.draw(r);
        window.draw(t);
        window.display();
    }
    return EXIT_SUCCESS;
}

Why the texture appears only in the first quadrant

What's wrong with this code using SFML?
In the code below, I have this image (1000x1000) and I want to show it in a window (500x500) using sf::RenderTexture.
However, only part of the image appears in the first quadrant:
#include <SFML/Graphics.hpp>
using namespace sf;

int main()
{
    RenderWindow window({500, 500}, "SFML Views", Style::Close);
    View camera;
    camera.setSize(Vector2f(window.getSize()));

    Texture background;
    background.loadFromFile("numeros.png");
    Sprite numeros(background);

    RenderTexture texture;
    texture.create(window.getSize().x, window.getSize().y);
    Sprite content;
    content.setTexture(texture.getTexture());

    texture.draw(numeros);
    texture.display();

    while (window.isOpen())
    {
        for (Event event; window.pollEvent(event);)
            if (event.type == Event::Closed)
                window.close();

        window.clear();
        window.setView(camera);
        window.draw(content);
        window.display();
    }
    return EXIT_SUCCESS;
}
As far as I can understand, the code should generate the original image (1000x1000) automatically adjusted to 500x500.
Could anyone tell me what is wrong?
You're facing, in fact, two distinct problems:
First one:
As far as I can understand, the code should generate the original image (1000x1000) automatically adjusted to 500x500.
This is not really true. SFML handles sprites at the real size of the texture. If your image is 1000x1000 but you want to represent it as 500x500, you should assign the texture to a sprite, as you do:
Sprite numeros(background);
and then scale this sprite to fit in a 500x500 window, this is:
numeros.setScale(0.5, 0.5);
With this change you should see the whole image, but...
Second one:
You're messing with the view of the window. If we check SFML documentation, we can see that sf::View expects:
An sf::FloatRect: that is, a coordinate (x, y) - in this case the top-left corner - and a size (width, height)
or
Two sf::Vector2f: one corresponding to the coordinates of the center and the other corresponding to the size of the view.
Assuming you want to use the second one, you're missing the first parameter, the center coordinates - but a view is not really necessary here. If you simply don't apply the view, the image will be shown across the whole window.
So you simply need to remove:
window.setView(camera);
The code I've tried:
int main()
{
    RenderWindow window({ 500, 500 }, "SFML Views", Style::Close);
    View camera;
    camera.setSize(Vector2f(window.getSize()));

    Texture background;
    background.loadFromFile("numeros.png");
    Sprite numeros(background);
    numeros.setScale(0.5, 0.5); // <-- Add this

    RenderTexture texture;
    texture.create(window.getSize().x, window.getSize().y);
    Sprite content;
    content.setTexture(texture.getTexture());

    texture.draw(numeros);
    texture.display();

    while (window.isOpen())
    {
        for (Event event; window.pollEvent(event);)
            if (event.type == Event::Closed)
                window.close();

        window.clear();
        //window.setView(camera); <-- Remove this
        window.draw(content);
        window.display();
    }
    return EXIT_SUCCESS;
}
And my result:
Just to add another option to #alseether 's excellent response, I realized that the whole issue consisted of that bad View initialization.
This way you can also set the size of the view = to the size of the background image (1000,1000) and finally set the center of the view to the windows's upper left corner.
As the view is larger than the window size (500,500) it will automatically be adjusted to this new size.
In short, the section to be changed would be:
View camera;
camera.setSize(Vector2f(background.getSize().x, background.getSize().y));
camera.setCenter(Vector2f(window.getSize()));

Mirroring the Y axis in SFML

Hey so I'm integrating box2d and SFML, and box2D has the same odd, mirrored Y-axis coordinate system as SFML, meaning everything is rendered upside down. Is there some kind of function or short amount of code I can put that simply mirrors the window's render contents?
I'm thinking I can put something in sf::view to help with this...
How can I easily flip the Y-axis, for rendering purposes only, without affecting the bodies' dimensions/locations?
I don't know what Box2D is, but when I wanted to flip the Y axis using OpenGL, I just applied a negative scaling factor to the projection matrix, like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);
If you want to do it independently of OpenGL, simply apply an sf::View with a negative size component on the axis you want to flip.
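A rough sketch of that view-based flip (my assumption, not stated in the original answer: SFML accepts a negative view height, which mirrors the Y axis of everything drawn while that view is active):
// `window` is assumed to be an existing sf::RenderWindow
sf::Vector2f size(window.getSize());
sf::View flipped(sf::FloatRect(0.f, 0.f, size.x, size.y));
flipped.setSize(size.x, -size.y); // negative height flips the Y axis for rendering only
window.setView(flipped);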
It sounds like your model uses a conventional coordinate system (positive y points up), and you need to translate that to the screen coordinate system (positive y points down).
When copying model/Box2D position data to any sf::Drawable, manually transform between the model and screen coordinate systems:
b2Vec2 position = body->GetPosition();
sprite.SetPosition( position.x, window.GetHeight() - position.y );
You can hide this in a wrapper class or function, but it needs to sit between the model and renderer as a pre-render transform. I don't see a place to set that in SFML.
I think Box2D has the coordinate system you want; just set the gravity vector based on your model (0, -10) instead of the screen.
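A minimal sketch of that suggestion (the b2World constructor signature and header path vary between Box2D versions):
#include <box2d/box2d.h> // older releases use <Box2D/Box2D.h>

// conventional y-up world: gravity pulls toward negative y
b2Vec2 gravity(0.0f, -10.0f);
b2World world(gravity);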
How can I easily flip the Y-axis, for rendering purposes only, without affecting the bodies' dimensions/locations?
By properly applying transforms. First, you can apply a transform that sets the window's bottom-left corner as the origin. Then, scale the Y axis by a factor of -1 to flip it as the second transform.
For this, you can use sf::Transformable to specify each transformation individually (i.e., the setting of the origin and the scaling) and then – by calling sf::Transformable::getTransform() – obtain an sf::Transform object that corresponds to the composed transform.
Finally, when rendering the corresponding object, pass this transform object to the sf::RenderTarget::draw() member function as its second argument. An sf::Transform object implicitly converts to a sf::RenderStates which is the second parameter type of the corresponding sf::RenderTarget::draw() overload.
As an example:
#include <SFML/Graphics.hpp>

auto main() -> int {
    auto const width = 300, height = 300;
    sf::RenderWindow win(sf::VideoMode(width, height), "Transformation");
    win.setFramerateLimit(60);

    // create the composed transform object
    const sf::Transform transform = [height]{
        sf::Transformable transformation;
        transformation.setOrigin(0, height); // 1st transform
        transformation.setScale(1.f, -1.f);  // 2nd transform
        return transformation.getTransform();
    }();

    sf::RectangleShape rect({30, 30});

    while (win.isOpen()) {
        sf::Event event;
        while (win.pollEvent(event))
            if (event.type == sf::Event::Closed)
                win.close();

        // update rectangle's position
        rect.move(0, 1);

        win.clear();
        rect.setFillColor(sf::Color::Blue);
        win.draw(rect); // no transformation applied
        rect.setFillColor(sf::Color::Red);
        win.draw(rect, transform); // transformation applied
        win.display();
    }
}
There is a single sf::RectangleShape object that is rendered twice with different colors:
Blue: no transform was applied.
Red: the composed transform was applied.
They move in opposite directions as a result of flipping the Y axis.
Note that the object space position coordinates remain the same. Both rendered rectangles correspond to the same object, i.e., there is just a single sf::RectangleShape object, rect – only the color is changed. The object space position is rect.getPosition().
What is different for these two rendered rectangles is the coordinate reference system. Therefore, the absolute space position coordinates of these two rendered rectangles also differ.
You can use this approach in a scene tree. In such a tree, the transforms are applied in a top-down manner from the parents to their children, starting from the root. The net effect is that children's coordinates are relative to their parent's absolute position.
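As an illustrative sketch of that top-down composition (the Node type here is hypothetical, not an SFML class; holding a std::vector of an incomplete element type requires C++17):
#include <SFML/Graphics.hpp>
#include <vector>

struct Node
{
    sf::Sprite sprite;
    std::vector<Node> children;

    void draw(sf::RenderTarget& target, const sf::Transform& parent = sf::Transform::Identity) const
    {
        // draw() composes the accumulated parent transform with this node's own transform
        target.draw(sprite, sf::RenderStates(parent));
        // children are drawn relative to this node's absolute transform
        const sf::Transform accumulated = parent * sprite.getTransform();
        for (const Node& child : children)
            child.draw(target, accumulated);
    }
};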