SFML sf::View::move inconsistency - C++

UPDATE:
I couldn't figure out the exact problem, however I made a fix that's good enough for me: whenever the player's X value is less than half the screen's width, I just snap the view back to its starting center (top-left corner at the origin) using sf::View::setCenter().
So I'm working on a recreation of Zelda II to learn SFML well enough to make my own game based on Zelda II. The issue is the screen scrolling: for some reason, if Link walks away from the wall far enough to make the camera follow him, and then moves back toward the wall, the camera won't go all the way back to the wall; the same thing happens at the other wall at the end of the scene/room. This can be repeated to push the camera's stopping point further and further away from the wall. It happens on both sides of the scene, and I have reason to believe it has something to do with my attempt to make the game frame-rate independent. Here's a GIF of my issue to help understand:
My camera function:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.move(int(this->player.getVar('v') * this->player.dt * this->player.dtM), 0);
    }
}
player.getVar() is a temporary function I'm using to get the player's x position and x velocity: passing 'x' returns the player's x position, and 'v' returns the x velocity. WIDTH is equal to 256, and sceneWidth equals 767, the width of the image I'm using for the background. dt and dtM are variables for the frame independence I mentioned earlier; this is their declaration:
sf::Clock sclock;
float dt = 0;
float dtM = 60;
int frame = 0;

void updateTime() {
    dt = sclock.restart().asSeconds();
    frame += 1 * dt * dtM;
}
updateTime() is called every frame, so dt is updated every frame as well. frame is just a frame counter for Link's animations and isn't relevant to the question. Everything that moves and is rendered on the screen has its movement multiplied by dt and dtM.

There's a clear mismatch between the movement of the player and that of the camera. You don't show the code that moves the player, but my guess is that you don't cast the movement to int there, as you do in the view.move call. That wouldn't be a problem if you were setting the absolute position of the camera, but since you are moving it incrementally, the small truncation error accumulates each frame, causing your problem.
One possible solution is to skip the cast, which is unnecessary because sf::View::move accepts float arguments:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.move(this->player.getVar('v') * this->player.dt * this->player.dtM, 0);
    }
}
Or, even better, don't use view.move at all; directly set the position of the camera each frame. Something like:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.setCenter(this->player.getVar('x'), this->view.getCenter().y);
    }
}
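Going a step beyond the answer, the range check can be folded into a clamp on the camera's center, which also guarantees the view ends up flush with either wall. A minimal sketch, assuming getVar('x') returns a float and the same WIDTH/sceneWidth members (std::clamp needs C++17):

#include <algorithm> // std::clamp (C++17)

void Game::camera() {
    // Center on the player, but never show past either end of the scene.
    float half = this->WIDTH / 2.0f;
    float px = this->player.getVar('x');
    this->view.setCenter(std::clamp(px, half, this->sceneWidth - half),
                         this->view.getCenter().y);
}

With the clamp, the camera always lands exactly on the wall limit, so there is no residual offset to snap away in an update.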

Related

How to get Unreal Engine touch index type in C++?

I added touch input for my game's character. The camera moves when a player's finger is dragged across the screen. However, when the joysticks are used to move, the camera doesn't move at the same time.
When I do the same thing with Blueprints, it works and the camera still moves when I drag the touch while pressing the joystick to move. I think the problem is in the ETouchIndex::Type: in the Blueprint, when I use the finger index given by the touch event, it works, but when I hard-code Touch 1 as the finger index, it does not. I think that if I put the right index in my C++ code it will work too, but where can I find the finger index? Can anyone please help me?
// here's my touch code that executes every tick
FVector2D TouchLocation;
APlayerController* ActivePlayerController = UGameplayStatics::GetPlayerController(this, 0);
ActivePlayerController->GetInputTouchState(TouchType, TouchLocation.X, TouchLocation.Y, IsTouched);
if (!IsTouched)
{
    DidOnce = false;
}
if (IsTouchMoved())
{
    if (!DidOnce)
    {
        ActivePlayerController->GetInputTouchState(TouchType, PrevX, PrevY, IsTouched);
        DidOnce = true;
    }
    ActivePlayerController->GetInputTouchState(TouchType, X, Y, IsTouched);
    float FinalRotYaw = (X - PrevX) * UGameplayStatics::GetWorldDeltaSeconds(this) * 20;
    float FinalRotPitch = (Y - PrevY) * UGameplayStatics::GetWorldDeltaSeconds(this) * 20;
    AddControllerYawInput(FinalRotYaw);
    AddControllerPitchInput(FinalRotPitch);
    ActivePlayerController->GetInputTouchState(TouchType, PrevX, PrevY, IsTouched);
}
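No answer is included in this excerpt, but as a hedged sketch (not from the thread), one way to find the active finger is to scan the touch indices with GetInputTouchState, which reports whether a given index is currently pressed. ActiveTouch below is a hypothetical local:

// Sketch: find which finger(s) are currently down instead of assuming Touch 1.
// When a joystick finger is already down, the camera drag will typically be
// a later index (e.g. ETouchIndex::Touch2).
ETouchIndex::Type ActiveTouch = ETouchIndex::Touch1; // fallback
for (int32 i = 0; i < 10; ++i) // the engine tracks up to ten touches
{
    const ETouchIndex::Type Index = static_cast<ETouchIndex::Type>(i);
    float TX = 0.f, TY = 0.f;
    bool bPressed = false;
    ActivePlayerController->GetInputTouchState(Index, TX, TY, bPressed);
    if (bPressed)
    {
        ActiveTouch = Index; // remember the last pressed finger seen
    }
}
// Use ActiveTouch in place of the fixed TouchType when reading the drag.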

My collision resolution for a top down racing game doesn't quite work

Sorry for my poor English.
I'm working on the collision resolution of a top-down racing game.
Collision detection works from an image containing two colors (white being a wall and black being not a wall): I just check whether the player's (the car's) middle position is inside the white color on the image.
This part works fine.
When it comes to resolution, I'm getting some weird bugs.
I'm supposed to make sure the car stays on the track and cannot get out of it.
My broken solution goes like this:
1. create a vector named "Distance" that contains the current position minus the previous one
2. normalize the Distance vector to unit length
3. subtract this normalized Distance vector from the current position until the player isn't on the white color
4. cancel the velocity of the player (I don't plan on keeping it like that)
It sounds good in my head, but when I apply it I get segmentation faults and occasionally some teleportation.
The following code is written using the SFML library and is my attempt at the collision. The player.update() call at the end only moves and rotates the car.
void RacingMode::update(const sf::Time& deltaTime)
{
    static sf::Vector2f prevPos(0, 0);
    static sf::Vector2f currPos(0, 0);
    currPos = sf::Vector2f(player.getPosition().x, player.getPosition().y);
    if (raceTrack.getPixelColor(sf::Vector2u(currPos.x, currPos.y)) != sf::Color::Black) {
        sf::Vector2f dist;
        dist.x = currPos.x - prevPos.x;
        dist.y = currPos.y - prevPos.y;
        dist.x = dist.x / sqrt(dist.x * dist.x + dist.y * dist.y);
        dist.y = dist.y / sqrt(dist.x * dist.x + dist.y * dist.y);
        sf::Vector2f newPos(0, 0);
        float i = 0.0;
        do {
            newPos = currPos - dist * i;
            i += 0.01;
        } while (raceTrack.getPixelColor(sf::Vector2u(newPos.x, newPos.y)) != sf::Color::Black);
        currPos = newPos;
        player.setPosition(newPos);
        player.setVelocity(0);
    }
    prevPos = currPos;
    player.update(deltaTime);
}
I'd be grateful to anyone who can point out how I failed my attempt (or can propose another way of solving the problem).
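No answer appears in this excerpt, but two defects stand out in the normalization step: dist.y is divided by a length recomputed from the already-normalized dist.x, and when currPos equals prevPos (as on the first frame) the length is zero, so the division produces NaN and the push-back loop can probe pixels far outside the image, which would explain both the segmentation faults and the teleporting. A minimal sketch of a safer normalization, under those assumptions:

#include <cmath>

// Compute the length once, from the unmodified components, and skip the
// push-back entirely when the car hasn't moved since the last frame.
sf::Vector2f dist(currPos.x - prevPos.x, currPos.y - prevPos.y);
const float len = std::sqrt(dist.x * dist.x + dist.y * dist.y);
if (len > 0.0f) {
    dist.x /= len;
    dist.y /= len;
    // ...step currPos back along -dist as before, but clamp the probed
    // coordinates to the image bounds before calling getPixelColor().
}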

Looping a sprite infinitely in Cocos2D

I'm making a vertical-scrolling platformer game, and I want to create sprites that move left-to-right (or right-to-left) and, when they're off the screen, appear on the other side.
I have an implementation that is mostly working; the only problem is that the sprites on a single floor keep getting closer and closer to each other with every loop.
I'm really not good at describing things, so please check this video.
I'm using the following code to calculate the new position of the nodes:
pos.x = fmodf(size.width + pos.x + this->currentDir * this->speed * delta, this->len + size.width) - size.width;
len is the width after which the sprite gets repositioned to 0 (actually to -size.width, where size.width is the width of the sprite), currentDir is either 1 or -1, and delta is the time step from the update() method.
Every sprite is positioned in its own update(), but I already tried doing everything in the Scene's update() method, and the result was the same.
If your delta variable is increasing over time, then I believe your pos.x would also increase in the same proportion, which is why the distance between the floors would change.
Have you tried resetting the delta value each time the floor goes offscreen?
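Not from the original thread, but one property of the formula above is worth noting: fmodf keeps the sign of its first argument, so when currentDir is -1 and the shifted position goes negative, the result lands below -size.width instead of wrapping to the far side. A sign-safe version of the wrap, as a sketch using the question's names:

#include <cmath>

// Wrap x into [-sizeWidth, len): fold fmodf's negative results back into
// [0, m) before shifting, so right-to-left movement wraps cleanly too.
float wrapX(float x, float sizeWidth, float len, float dir, float speed, float delta)
{
    const float m = len + sizeWidth;                 // period of the wrap
    float a = std::fmod(sizeWidth + x + dir * speed * delta, m);
    if (a < 0.0f)
        a += m;                                      // fold into [0, m)
    return a - sizeWidth;                            // shift back
}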

Isometric Collision - 'Diamond' shape detection

My project uses an isometric perspective, and for the time being I am showing the coordinates in grid format above the tiles for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is creating some issues with the 'triangular' empty corner areas of the textures. I think the issue is something like below (blue is how I think my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean that checks the tile I am standing on (which takes the pixel central to the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think it's because the cut-off corner areas of each texture are (I think) being considered part of the grid area, so when the player is in one of these corner areas the check is not truly checking the correct tile, and so returns the wrong results.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
    int X = (tileToConvert->x - tileToConvert->y) * 64; //change 64 to TILE_WIDTH_HALF
    int Y = (tileToConvert->x + tileToConvert->y) * 25;
    /*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
    int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
    if (xOrY)
    {
        return X;
    }
    else
    {
        return Y;
    }
}
and the code for checking the player's movement is:
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData) //check if the movement will end on a legitimate road tile UNOPTIMISED AS RUNS EVERY FRAME FOR EVERY TILE
{
    int x = xpos + 7; //get the center bottom pixel as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
    int y = ypos + 45;
    int mapX = (x / 64 + y / 25) / 2; //64 is TILE_WIDTH_HALF and 25 is TILE_HEIGHT_HALF
    int mapY = (y / 25 - (x / 64)) / 2;
    for (int i = 0; i < mapData->tilesList.size(); i++) //for each tile of the map
    {
        if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) //if there is an existing tile that will be entered
        {
            if (mapData->tilesList[i]->movementTile)
            {
                HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
                return true;
            }
        }
    }
    return false;
}
I'm a little stuck on progressing until this is fixed in the game-loop aspect of things. If anyone either recognizes the issue from this or can help, it would be great and I would appreciate it. For reference, my tile textures are 128x64 pixels, and the math behind drawing them to screen treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog standard grid view, right? Well, close. Isometric view also changes the dimensions if you're starting with a square grid. Anyhow: can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which will make all your other logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. now your "can I move 'down'" logic is much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
(int, int) whichTile(clickX, clickY) {
    logicalX, logicalY = transform(clickX, clickY)
    return (logicalX / 64, logicalY / 64)
}
You can do checks like seeing whether x0,y0 and x1,y1 are on the same tile, in the logical space, with something as simple as:
bool isSameTile(x0, y0, x1, y1) {
    return floor(x0/64) == floor(x1/64) && floor(y0/64) == floor(y1/64)
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with some matrix library, you can do the equivalent math pretty straightforwardly, but if you separate concerns of logic management from display / input through these transformations, I suspect you'll have a much easier time of it.
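To make this concrete for the question's tiles (drawn 128 wide and 50 high, so half-width 64 and half-height 25), here is a sketch of the two transforms. One observation, not part of the original answer: the integer divisions in CheckMovementTile (x / 64, y / 25) truncate before the averaging, so keeping the math in floats and flooring only at the end resolves points in the triangular corners to the correct tile:

#include <cmath>

// Half-tile dimensions from the question: tiles drawn 128 wide, 50 high.
const float TILE_WIDTH_HALF  = 64.0f;
const float TILE_HEIGHT_HALF = 25.0f;

// Logical grid -> screen (matches VisualComponent::TileConversion).
void gridToScreen(float gridX, float gridY, float& screenX, float& screenY)
{
    screenX = (gridX - gridY) * TILE_WIDTH_HALF;
    screenY = (gridX + gridY) * TILE_HEIGHT_HALF;
}

// Screen -> logical grid: the exact inverse, kept in floats and floored
// only at the end, so diamond-edge points land on the right tile.
void screenToGrid(float screenX, float screenY, int& gridX, int& gridY)
{
    float fx = (screenX / TILE_WIDTH_HALF + screenY / TILE_HEIGHT_HALF) / 2.0f;
    float fy = (screenY / TILE_HEIGHT_HALF - screenX / TILE_WIDTH_HALF) / 2.0f;
    gridX = static_cast<int>(std::floor(fx));
    gridY = static_cast<int>(std::floor(fy));
}

With these two functions, the movement check reduces to transforming the foot pixel with screenToGrid and looking up the tile at (gridX, gridY).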

Preserving rotations in OpenGL

I'm drawing an object (say, a cube) in OpenGL that a user can rotate by clicking / dragging the mouse across the window. The cube is drawn like so:
void CubeDrawingArea::redraw()
{
    Glib::RefPtr<Gdk::GL::Drawable> gl_drawable = get_gl_drawable();
    gl_drawable->gl_begin(get_gl_context());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    {
        glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);
        glCallList(m_cubeID);
    }
    glPopMatrix();
    gl_drawable->swap_buffers();
    gl_drawable->gl_end();
}
and rotated with this function:
bool CubeDrawingArea::on_motion_notify_event(GdkEventMotion* motion)
{
    if (!m_leftButtonDown)
        return true;
    _3V cur_pos;
    get_trackball_point((int) motion->x, (int) motion->y, cur_pos);
    const double dx = cur_pos.x - m_lastTrackPoint.x;
    const double dy = cur_pos.y - m_lastTrackPoint.y;
    const double dz = cur_pos.z - m_lastTrackPoint.z;
    if (dx || dy || dz)
    {
        // Update angle, axis of rotation, and redraw
        m_angle = 90.0 * sqrt((dx * dx) + (dy * dy) + (dz * dz));
        // Axis of rotation comes from cross product of last / cur vectors
        m_rotAxis.x = (m_lastTrackPoint.y * cur_pos.z) - (m_lastTrackPoint.z * cur_pos.y);
        m_rotAxis.y = (m_lastTrackPoint.z * cur_pos.x) - (m_lastTrackPoint.x * cur_pos.z);
        m_rotAxis.z = (m_lastTrackPoint.x * cur_pos.y) - (m_lastTrackPoint.y * cur_pos.x);
        redraw();
    }
    return true;
}
There is some GTK+ stuff in there, but it should be pretty obvious what it's for. The get_trackball_point() function projects the window coordinates X, Y onto a hemisphere (the virtual "trackball") that is used as a reference point for rotating the object. Anyway, this more or less works, but after I'm done rotating and I go to rotate again, the cube snaps back to the original position, obviously, since m_angle will be reset back to near 0 the next time I rotate. Is there any way to avoid this and preserve the rotation?
Yeah, I ran into this problem too.
What you need to do is keep a rotation matrix around that "accumulates" the current state of rotation, and use it in addition to the rotation matrix that comes from the current dragging operation.
Say you have two matrices, lastRotMx and currRotMx. Make them members of CubeDrawingArea if you like.
You haven't shown us this, but I assume that m_lastTrackPoint is initialized whenever the mouse button goes down for dragging. When that happens, copy currRotMx into lastRotMx.
Then in on_motion_notify_event(), after you calculate m_rotAxis and m_angle, create a new rotation matrix draggingRotMx based on m_rotAxis and m_angle; then multiply lastRotMx by draggingRotMx and put the result in currRotMx.
Finally, in redraw(), instead of
glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z);
rotate by currRotMx.
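A minimal sketch of that accumulation (not the original answer's code), using the fixed-function matrix stack itself to do the multiply; lastRotMx and currRotMx are the hypothetical GLdouble[16] members described above, both initialized to the identity matrix:

// On mouse-button-down, before the new drag starts:
//     memcpy(lastRotMx, currRotMx, sizeof currRotMx);

// In on_motion_notify_event(), after m_angle and m_rotAxis are computed:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRotated(m_angle, m_rotAxis.x, m_rotAxis.y, m_rotAxis.z); // this drag's rotation
glMultMatrixd(lastRotMx);                                  // composed with the old state
glGetDoublev(GL_MODELVIEW_MATRIX, currRotMx);              // read the product back
glPopMatrix();

// In redraw(), in place of the glRotated call:
//     glMultMatrixd(currRotMx);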
Update: Or instead of all that... I haven't tested this, but I think it would work:
Make cur_pos a class member so it stays around, but it's initialized to zero, as is m_lastTrackPoint.
Then, whenever a new drag motion is started, before you initialize m_lastTrackPoint, let _3V dpos = cur_pos - m_lastTrackPoint (pseudocode).
Finally, when you do initialize m_lastTrackPoint based on the mouse event coords, subtract dpos from it.
That way, your cur_pos will already be offset from m_lastTrackPoint by an amount based on the accumulation of offsets from past arcball drags.
Probably error would accumulate as well, but it should be gradual enough so as not to be noticeable. But I'd want to test it to be sure... composed rotations are tricky enough that I don't trust them without seeing them.
P.S. your username is demotivating. Suggest picking another one.
P.P.S. For those who come later searching for answers to this question, the keywords to search on are "arcball rotation". The definitive article is Ken Shoemake's section in Graphics Gems IV. See also this arcball tutorial for JOGL.