I have a board where I calculate everything in millimeters, and when drawing I convert back to pixels. I also have a mouse position in pixels and a grid that I want to snap the mouse to.
These are the functions that convert between pixels and millimeters:
int global::mmToPixel(double value)
{
    return global::dpi / (double)25.4 * value;
}

double global::pixelToMm(int value)
{
    return ((double)value) / (global::dpi / (double)25.4);
}
My values:
global::dpi = 600.0;
global::gridSize = 2.54;
And the mouse position function:
QPointD canvas::mousePosition(bool tiled)
{
    QPoint mp = this->mapFromGlobal(QCursor::pos());
    mp.setX(mp.x() - mp.x()%global::mmToPixel(global::gridSize));
    mp.setY(mp.y() - mp.y()%global::mmToPixel(global::gridSize));
    QPointD p(global::pixelToMm(mp.x()), global::pixelToMm(mp.y()));
    return p;
}
dpi is dots per inch, gridSize is how big a tile is, mp contains the raw mouse coordinates, and p holds the snapped coordinates.
I tried a lot of versions (calculating in pixels or in millimeters), but none of them worked. Sometimes, when the mouse is on the edge of a tile, it flickers and jumps one or two tiles in a random direction.
If I remove the two alignment lines it works perfectly. When I wrote this in C# it worked, but in C++ it doesn't.
QPoint holds two ints, and QPointD is a custom class that holds two doubles.
Any ideas?
Sorry for my poor English.
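For reference, here is a minimal sketch of doing the snap entirely in millimetres with std::round, which avoids the integer modulo (and its truncation and its behaviour for negative coordinates). This is only a sketch against the declarations shown above, and using the tiled flag to gate the snapping is my assumption:

#include <cmath>

QPointD canvas::mousePosition(bool tiled)
{
    const QPoint mp = this->mapFromGlobal(QCursor::pos());

    // Convert to millimetres first, then snap to the nearest grid multiple.
    double x = global::pixelToMm(mp.x());
    double y = global::pixelToMm(mp.y());
    if (tiled) {
        x = std::round(x / global::gridSize) * global::gridSize;
        y = std::round(y / global::gridSize) * global::gridSize;
    }
    return QPointD(x, y);
}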
UPDATE:
I couldn't figure out the exact problem, but I made a fix that's good enough for me: whenever the player's X value is less than half the screen's width, I just snap the view back to the center (top-left corner) using sf::View::setCenter().
So I'm working on a recreation of Zelda II to help me learn SFML well enough to make my own game based on Zelda II. The issue is the screen scrolling: for some reason, if Link walks away from the wall (which starts the camera following him) and then moves back toward the wall, the camera won't go all the way back to the wall, and the same thing happens at the wall on the other end of the scene/room. Doing this multiple times keeps pushing the point where the camera stops further from the wall. It happens on both sides of the scene, and I have reason to believe it has something to do with my attempt to make the game frame-rate independent. Here's a GIF of the issue to help illustrate it:
My camera function:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.move(int(this->player.getVar('v') * this->player.dt * this->player.dtM), 0);
    }
}
player.getVar() is a temporary function I'm using to get the player's x position and x velocity: passing 'x' returns the player's x position, and 'v' returns the x velocity. WIDTH is 256, and sceneWidth is 767, the width of the image I'm using for the background. dt and dtM are the variables for the frame independence I mentioned earlier; this is their declaration:
sf::Clock sclock;
float dt = 0;
float dtM = 60;
int frame = 0;

void updateTime() {
    dt = sclock.restart().asSeconds();
    frame += 1 * dt * dtM;
}
updateTime() is called every frame, so dt is updated every frame as well. frame is just a frame counter for Link's animations and isn't relevant to the question. Everything that moves and is rendered on screen is multiplied by dt and dtM.
There's a clear mismatch between the movement of the player and that of the camera. You don't show the code that moves the player, but my guess is that you don't cast the movement to int there, as you do in the view.move call. That wouldn't be a problem if you were setting the absolute position of the camera, but since you are moving it incrementally, the little offset accumulates each frame, causing your problem.
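To see why this accumulates, here is a small standalone sketch (not part of the game code, with a made-up per-frame value): casting a fractional per-frame offset to int throws away a little movement every frame, and it never comes back.

#include <iostream>

int main()
{
    const float perFrame = 1.6f; // some fractional movement per frame
    float exact = 0.f;           // what the player accumulates
    float truncated = 0.f;       // what the camera accumulates with the int() cast

    for (int frame = 0; frame < 60; ++frame) {
        exact += perFrame;
        truncated += int(perFrame); // the .6 is dropped every single frame
    }
    std::cout << exact << " vs " << truncated << '\n'; // roughly 96 vs 60
}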
One possible solution is to skip the cast, which is unnecessary because sf::View::move accepts float arguments:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.move(this->player.getVar('v') * this->player.dt * this->player.dtM, 0);
    }
}
Or, even better, don't use view.move at all; set the position of the camera directly each frame. Something like:
void Game::camera() {
    if (this->player.getVar('x') >= this->WIDTH / 2 and this->player.getVar('x') < this->sceneWidth - this->WIDTH / 2) {
        this->view.setCenter(this->player.getVar('x'), this->view.getCenter().y);
    }
}
I'm trying to draw a point 3 pixels across with QPainter, but the following code instead draws a horizontal line segment 3 pixels thick.
#include <QPainter>
#include <QImage>

int main()
{
    const int w=1000, h=1000;
    QImage img(w, h, QImage::Format_RGBX8888);
    {
        QPainter p(&img);
        p.fillRect(0,0,w,h,Qt::black);
        p.scale(w,h);
        p.setPen(QPen(Qt::red, 3./w, Qt::SolidLine, Qt::RoundCap));
        p.drawPoint(QPointF(0.1,0.1));
    }
    img.save("test.png");
}
Here's the top left corner of the resulting image:
I expect to get a point that is a red circle, or at least a square, but instead I get this line segment. If I comment out p.scale(w,h) and draw the point with width 3 (instead of 3./w) at position (100,100), then I get a small, mostly symmetric point about 3 pixels in height and width.
What's going on? Why do I get a line segment instead of point as expected? And how to fix it, without resorting to drawing an ellipse or to avoiding QPainter::scale?
I'm using Qt 5.10.0 on Linux x86 with g++ 5.5.0. The same happens on Qt 5.5.1.
It appears that QPaintEngineEx::drawPoints renders points as line segments of length 1/63. See the following code from qtbase/src/gui/painting/qpaintengineex.cpp in the Qt sources:
void QPaintEngineEx::drawPoints(const QPointF *points, int pointCount)
{
    QPen pen = state()->pen;
    if (pen.capStyle() == Qt::FlatCap)
        pen.setCapStyle(Qt::SquareCap);

    if (pen.brush().isOpaque()) {
        while (pointCount > 0) {
            int count = qMin(pointCount, 16);
            qreal pts[64];
            int oset = -1;
            for (int i=0; i<count; ++i) {
                pts[++oset] = points[i].x();
                pts[++oset] = points[i].y();
                pts[++oset] = points[i].x() + 1/63.;
                pts[++oset] = points[i].y();
            }
            QVectorPath path(pts, count * 2, qpaintengineex_line_types_16, QVectorPath::LinesHint);
            stroke(path, pen);
            pointCount -= 16;
            points += 16;
        }
    } else {
        for (int i=0; i<pointCount; ++i) {
            qreal pts[] = { points[i].x(), points[i].y(), points[i].x() + qreal(1/63.), points[i].y() };
            QVectorPath path(pts, 2, 0);
            stroke(path, pen);
        }
    }
}
Notice the pts[++oset] = points[i].x() + 1/63.; line in the opaque-brush branch. This is the second vertex of the path, shifted with respect to the requested position of the point.
This explains why the line extends to the right of the requested position and why its length depends on the scale. So it seems the code in the OP isn't wrong for an ideal QPainter implementation; it has simply run into a Qt bug (be it in the implementation of the method or in its documentation).
So the conclusion: one has to work around this problem either by using a different scale, by drawing ellipses, or by drawing line segments much shorter than the ones QPainter::drawPoints produces.
I've reported this as QTBUG-70409.
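For completeness, here is a sketch of the ellipse workaround under the scaled painter, mirroring the setup from the question; the antialiasing hint and the 1.5/w radius (about 3 device pixels in diameter) are my choices:

#include <QPainter>
#include <QImage>

int main()
{
    const int w = 1000, h = 1000;
    QImage img(w, h, QImage::Format_RGBX8888);
    {
        QPainter p(&img);
        p.fillRect(0, 0, w, h, Qt::black);
        p.scale(w, h);
        // Work around QTBUG-70409: draw a small filled circle instead of a point.
        p.setRenderHint(QPainter::Antialiasing);
        p.setPen(Qt::NoPen);
        p.setBrush(Qt::red);
        const qreal radius = 1.5 / w; // roughly 3 device pixels across
        p.drawEllipse(QPointF(0.1, 0.1), radius, radius);
    }
    img.save("test.png");
}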
Though I have not been able to pinpoint exactly why the problem occurs, I have gotten relatively close to a solution. The problem lies with the scaling. I did a lot of trial and error with different scale factors and point widths using the code below.
const int w=500, h=500;
const int scale = 100;
float xPos = 250;
float yPos = 250;
float widthF = 5;

QImage img(w, h, QImage::Format_RGBX8888);
{
    QPainter p(&img);
    p.setRenderHints(QPainter::Antialiasing);
    p.fillRect(0,0,w,h,Qt::black);
    p.scale(scale, scale);
    p.setPen(QPen(Qt::red, widthF/(scale), Qt::SolidLine, Qt::RoundCap));
    p.drawPoint(QPointF(xPos/scale, yPos/scale));
}
img.save("test.png");
The above code produces this image:
My observations are:
1) Due to the high scaling, the point (which is just 3 pixels wide) cannot expand proportionally at a low pen width; if you set the width to something like 30, the round shape becomes visible.
2) If you want to keep the width of the point low, you have to decrease the scaling.
Sadly, I cannot explain why it does not expand proportionally at high scaling.
I'm making a shooter where the player can shoot at the mouse. I got the guns to point at the mouse, but I can't figure out how to correctly flip the image once it's turned to the left. I mean, I did flip the image, but it's not aligned right. I'm not really sure how to explain it; here's my code.
void PortalGun::update(Vector2i mPos) {
    float pi = 3.14159265359;
    float rotation = atan2(sprite.getGlobalBounds().top - mPos.y, sprite.getGlobalBounds().left - mPos.x) * 180 / pi;
    int x = player->sprite.getGlobalBounds().left + 16;
    int y = player->sprite.getGlobalBounds().top;
    if (rotation > -90 && rotation < 90) {
        player->dir = -1;
        sprite.setTextureRect(IntRect(0, 32, 64, -32));
    } else {
        player->dir = 1;
        sprite.setTextureRect(IntRect(0, 0, 64, 32));
    }
    sprite.setPosition(x, y + 15);
    sprite.setRotation(rotation + 170);
}
When the mouse is to the left of the gun, it flips the image but keeps rotating upward, so the mouse ends up 20-ish pixels higher. I can't just change the position when rotating, so what do I do? Sorry for sounding a bit cryptic; it's a bit hard to explain.
First of all, you should set your sprite's origin to the point where you'd like to rotate the gun (typically the handle or mounting point). To do this, use sf::Sprite::setOrigin(). The passed coordinates are relative to the top left corner of the sprite.
To get or set your sprite's position in the world (where your origin is), you can use sf::Sprite::getPosition() and sf::Sprite::setPosition().
Your rotation can stay as it is. It will rotate around your set origin.
To mirror your sprite, just scale it (sf::Sprite::setScale()) with a negative factor: sprite.setScale(-1, 1);. The negative factor mirrors/flips the image around the set origin, without forcing you to update the texture coordinates.
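Putting those pieces together, here is a rough sketch of the update function; the (8, 16) origin (my assumed handle position inside the 64x32 frame) and the anchoring offsets are mine, so adjust them to your sprites:

void PortalGun::update(Vector2i mPos) {
    const float pi = 3.14159265f; // std::atan2 needs <cmath>

    // Rotate around the gun's handle instead of the sprite's top-left corner.
    sprite.setOrigin(8.f, 16.f);
    sprite.setPosition(player->sprite.getGlobalBounds().left + 16,
                       player->sprite.getGlobalBounds().top + 15);

    // Angle from the gun to the mouse; the texture is assumed to point right.
    const sf::Vector2f pos = sprite.getPosition();
    const float rotation = std::atan2(mPos.y - pos.y, mPos.x - pos.x) * 180.f / pi;
    sprite.setRotation(rotation);

    // When aiming left the rotated sprite would be upside down. Flipping the
    // local y axis (the same effect as the negative-height texture rect) fixes
    // that without touching the texture coordinates.
    if (rotation > 90.f || rotation < -90.f) {
        player->dir = -1;
        sprite.setScale(1.f, -1.f);
    } else {
        player->dir = 1;
        sprite.setScale(1.f, 1.f);
    }
}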
My project uses an isometric perspective; for the time being I am showing the coordinates in grid format above the tiles for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is running into trouble with the 'triangular' empty corner areas of the textures. I think the issue is something like the below (blue is how I think my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean check for the tile I am standing on (which takes the pixel at the center of the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think it's because the cut-off corner areas of each texture are (I think) being considered part of the grid area, so when the player is in one of these corners the check isn't really looking at the correct tile and returns the wrong result.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
    int X = (tileToConvert->x - tileToConvert->y) * 64; //change 64 to TILE_WIDTH_HALF
    int Y = (tileToConvert->x + tileToConvert->y) * 25;
    /*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
    int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
    if (xOrY)
    {
        return X;
    }
    else
    {
        return Y;
    }
}
and the code for checking the player's movement is:
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData) //check if the movement will end on a legitimate road tile UNOPTIMISED AS RUNS EVERY FRAME FOR EVERY TILE
{
    int x = xpos + 7; //get the center bottom pixel as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
    int y = ypos + 45;
    int mapX = (x / 64 + y / 25) / 2; //64 is TILE_WIDTH_HALF and 25 is TILE_HEIGHT_HALF
    int mapY = (y / 25 - (x / 64)) / 2;
    for (int i = 0; i < mapData->tilesList.size(); i++) //for each tile of the map
    {
        if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) //if there is an existing tile that will be entered
        {
            if (mapData->tilesList[i]->movementTile)
            {
                HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
                return true;
            }
        }
    }
    return false;
}
I'm a little stuck on progressing with the game-loop side of things until this is fixed. If anyone recognizes the issue or might be able to help, it would be great and I would appreciate it. For reference, my tile textures are 128x64 pixels, and the math behind drawing them treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog standard grid view, right? Well, close. Isometric view also changes the dimensions if you're starting with a square grid. Anyhow: can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which will make all your other logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. your "can I move 'down'" logic is now much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
(int, int) whichTile(clickX, clickY) {
logicalX, logicalY = transform(clickX, clickY)
return (logicalX / 64, logicalY / 64)
}
You can check whether x0,y0 and x1,y1 are on the same tile, in logical space, with something as simple as:
bool isSameTile(x0, y0, x1, y1) {
return floor(x0/64) == floor(x1/64) && floor(y0/64) == floor(y1/64)
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with some matrix library, you can do the equivalent math pretty straightforwardly, but if you separate concerns of logic management from display / input through these transformations, I suspect you'll have a much easier time of it.
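To make the transforms concrete, here is a rough sketch using the question's tile metrics (TILE_WIDTH_HALF = 64, TILE_HEIGHT_HALF = 25); the Point struct and the function names are mine:

struct Point { float x, y; };

const float TILE_WIDTH_HALF  = 64.f;
const float TILE_HEIGHT_HALF = 25.f;

// logical (grid) space -> screen (isometric) space, e.g. for rendering
Point logicalToScreen(Point p) {
    return { (p.x - p.y) * TILE_WIDTH_HALF,
             (p.x + p.y) * TILE_HEIGHT_HALF };
}

// screen space -> logical space, e.g. for mapping clicks or foot positions
Point screenToLogical(Point s) {
    return { (s.x / TILE_WIDTH_HALF + s.y / TILE_HEIGHT_HALF) / 2.f,
             (s.y / TILE_HEIGHT_HALF - s.x / TILE_WIDTH_HALF) / 2.f };
}

Keeping the values in floats until you finally floor them to a tile index also avoids the truncation that the integer divisions in CheckMovementTile introduce.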
I am attempting to display the plot values of different points on my QCustomPlot, which has a line style of lsLine. I know I could set a mouse-over signal on the QCustomPlot, but that won't really help, since I only need to be informed when the mouse is over my plotted line. My question is: is there any way to find out whether the mouse is over a scatter point? Is there a signal I could connect to that would tell me when the mouse is over a scatter point?
Reimplement QCustomPlot::mouseMoveEvent or connect to QCustomPlot::mouseMove.
Then use the axes' pixelToCoord to translate the cursor's pixel coordinates to plot coordinates and search for the nearest points in your QCPDataMap with QMap::lowerBound(cursorX).
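A rough sketch of that lookup as a slot, assuming QCustomPlot 1.x where QCPGraph::data() returns a QCPDataMap* (a QMap<double, QCPData>); the class name, slot name, customPlot member, and the choice of graph(0) are mine:

// connect(customPlot, SIGNAL(mouseMove(QMouseEvent*)),
//         this, SLOT(onMouseMove(QMouseEvent*)));
void MyWindow::onMouseMove(QMouseEvent *event)
{
    QCPGraph *graph = customPlot->graph(0); // the graph holding your points
    double x = customPlot->xAxis->pixelToCoord(event->pos().x());

    QCPDataMap *data = graph->data();
    QCPDataMap::const_iterator it = data->lowerBound(x); // first point at or after x
    if (it == data->constEnd())
        return; // cursor is to the right of the last point

    // The previous point may be closer than the one lowerBound returned,
    // so compare the two and keep the nearer one.
    if (it != data->constBegin()) {
        QCPDataMap::const_iterator prev = it;
        --prev;
        if (x - prev.key() < it.key() - x)
            it = prev;
    }
    qDebug() << "nearest point:" << it.key() << it.value().value;
}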
You can easily just connect a slot to the mouseMove signal that QCustomPlot emits. You can then use QCPAxis::pixelToCoord to find the coordinate :
connect(this, SIGNAL(mouseMove(QMouseEvent*)), this, SLOT(showPointToolTip(QMouseEvent*)));

void QCustomPlot::showPointToolTip(QMouseEvent *event)
{
    // Plot coordinates are doubles; storing them in ints would truncate them.
    double x = this->xAxis->pixelToCoord(event->pos().x());
    double y = this->yAxis->pixelToCoord(event->pos().y());
    setToolTip(QString("%1 , %2").arg(x).arg(y));
}
When you use a datetime format on the X axis (with more than one point per second), pixelToCoord fails. If you want to display coordinates between points, this is the fastest way I have found. Maybe it's useful (with the QCustomPlot::mouseMove signal connected):
void MainWindow::onMouseMoveGraph(QMouseEvent* evt)
{
    int x = this->ui->customPlot->xAxis->pixelToCoord(evt->pos().x());
    int y = this->ui->customPlot->yAxis->pixelToCoord(evt->pos().y());
    if (this->ui->customPlot->selectedGraphs().count() > 0)
    {
        QCPGraph* graph = this->ui->customPlot->selectedGraphs().first();
        QCPData data = graph->data()->lowerBound(x).value();
        qDebug() << "pixelToCoord:" << data.key << data.value; //this is correct when the step is greater than 1 second

        double dbottom = graph->valueAxis()->range().lower;      //Y axis bottom value
        double dtop = graph->valueAxis()->range().upper;         //Y axis top value
        long ptop = graph->valueAxis()->axisRect()->top();       //graph top margin
        long pbottom = graph->valueAxis()->axisRect()->bottom(); //graph bottom position
        // result for the Y axis
        double valueY = (evt->pos().y() - ptop) / (double)(pbottom - ptop) * (double)(dbottom - dtop) + dtop;

        // or, more compactly, for the X axis
        double valueX = (evt->pos().x() - graph->keyAxis()->axisRect()->left()); //cursor offset in pixels from the left edge of the axis rect
        double ratio = (double)(graph->keyAxis()->axisRect()->right() - graph->keyAxis()->axisRect()->left()) / (double)(graph->keyAxis()->range().lower - graph->keyAxis()->range().upper); //ratio of pixel width to axis range
        // and the result for the X axis
        valueX = -valueX / ratio + graph->keyAxis()->range().lower;
        qDebug() << "calculated:" << valueX << valueY;
    }
}