Glitchy grid in QGraphicsView::drawBackground (C++)

I'm trying to properly display a grid pattern on QGraphicsView::drawBackground. Everything seems to work fine until I try to move an item added to a scene.
I add the line in MainWindow like this:
QPen pen;
pen.setColor(Qt::red);
pen.setWidth(3);
QGraphicsLineItem *line = new QGraphicsLineItem(0, 0, 100, 100);
line->setPen(pen);
line->setVisible(true);
line->setFlags(QGraphicsItem::ItemIsSelectable | QGraphicsItem::ItemIsMovable);
m_scene->addItem(line);
Methods of GraphicsView:
GraphicsView::GraphicsView() : cellSize(20)
{
    setSizePolicy(QSizePolicy::Expanding, QSizePolicy::Expanding);
}

void GraphicsView::drawBackground(QPainter *p, const QRectF &crect)
{
    p->save();
    p->setPen(QPen(Qt::black, 1));
    for (int x = crect.topLeft().x(); x < crect.bottomRight().x(); x += cellSize)
        for (int y = crect.topLeft().y(); y < crect.bottomRight().y(); y += cellSize)
            p->drawPoint(x, y);
    p->restore();
}
The problem: when I move the item, it leaves a trail of grid dots behind it, which are not aligned to the original grid.
I don't understand where this error comes from. Have I done something wrong?

I can think of two possible issues:
Change the view update mode (a sketch follows below). The options are described here.
setViewportUpdateMode(SmartViewportUpdate); or setViewportUpdateMode(FullViewportUpdate);
The smear may be caused by the item's bounding rectangle being half a pixel too small.
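For the first suggestion, a minimal sketch of where such a call would go, reusing the constructor from the question:

GraphicsView::GraphicsView() : cellSize(20)
{
    setSizePolicy(QSizePolicy::Expanding, QSizePolicy::Expanding);
    setViewportUpdateMode(QGraphicsView::FullViewportUpdate); // repaint the whole viewport on every change
}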
(I realize this question is old, but maybe it will help someone else)

The region you're given in drawBackground is not always the full view; when something on the scene changes, you're handed only the relevant region to redraw. So your loops start at the top-left corner of different "moving" regions rather than at a fixed grid origin.
One way to fix this would be to make the initial x and y multiples of cellSize:
for (int x = (int)crect.left() / cellSize * cellSize; x < crect.right(); x += cellSize)
    for (int y = (int)crect.top() / cellSize * cellSize; y < crect.bottom(); y += cellSize)
        p->drawPoint(x, y);
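Note that integer division truncates toward zero, so with negative scene coordinates the snapped start can land one cell inside the exposed region and the grid will drift there. A floor-based variant of the same fix (a sketch, reusing the names from the question):

#include <cmath>

void GraphicsView::drawBackground(QPainter *p, const QRectF &crect)
{
    p->save();
    p->setPen(QPen(Qt::black, 1));
    // snap the loop starts down to the nearest multiple of cellSize,
    // which also works for negative coordinates
    const int left = static_cast<int>(std::floor(crect.left() / cellSize)) * cellSize;
    const int top  = static_cast<int>(std::floor(crect.top()  / cellSize)) * cellSize;
    for (int x = left; x < crect.right(); x += cellSize)
        for (int y = top; y < crect.bottom(); y += cellSize)
            p->drawPoint(x, y);
    p->restore();
}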


Rogue line being drawn to window

I am making a graphing program in C++ using the SFML library. So far I have been able to draw a function to the screen. I have run into two problems along the way.
The first is a line which seems to return to the origin of the plane, starting from the end of my function.
You can see it in this image:
As you can see this "rogue" line seems to change colour as it nears the origin. My first question is what is this line and how may I eradicate it from my window?
The second problem, which is slightly unrelated and more mathematical, can be seen in this image:
As you can see, the asymptotes (points where the graph is undefined or discontinuous) are being drawn. This leads me to my second question: is there a way (in code) to identify an asymptote and not draw it to the window?
My code for anything drawn to the window is:
VertexArray axis(Lines, 4);
VertexArray curve(PrimitiveType::LinesStrip, 1000);
axis[0].position = Vector2f(100000, 0);
axis[1].position = Vector2f(-100000, 0);
axis[2].position = Vector2f(0, -100000);
axis[3].position = Vector2f(0, 100000);
float x;
for (x = -pi; x < pi; x += .0005f)
{
    curve.append(Vertex(Vector2f(x, -tan(x)), Color::Green));
}
I would very much appreciate any input : )
Update:
Thanks to the input of numerous people this code seems to work fine in fixing the asymptote problem:
for (x = -30 * pi; x < 30 * pi; x += .0005f)
{
    x0 = x1; y0 = y1;
    x1 = x; y1 = -1 / sin(x);
    a = fabs(atan2(y1 - y0, x1 - x0));
    if (a > .499f * pi)
    {
        curve.append(Vertex(Vector2f(x1, y1), Color::Transparent));
    }
    else
    {
        curve.append(Vertex(Vector2f(x1, y1), Color::Green));
    }
}
Update 2:
The following code gets rid of the rogue line:
VertexArray curve(Lines, 1000);
float x, y;
for (x = -30 * pi; x < 30 * pi; x += .0005f)
{
    y = -asin(x);
    curve.append(Vertex(Vector2f(x, y)));
}
for (x = -30 * pi + .0005f; x < 30 * pi; x += .0005f)
{
    y = -asin(x);
    curve.append(Vertex(Vector2f(x, y)));
}
The first problem looks like wrong polyline/curve handling. I don't know which API you are using for rendering, but some, like GDI, need the pen position set properly first. For example, if you draw like this:
Canvas->LineTo(x[0],y[0]);
Canvas->LineTo(x[1],y[1]);
Canvas->LineTo(x[2],y[2]);
Canvas->LineTo(x[3],y[3]);
...
Then you should do this instead:
Canvas->MoveTo(x[0],y[0]);
Canvas->LineTo(x[1],y[1]);
Canvas->LineTo(x[2],y[2]);
Canvas->LineTo(x[3],y[3]);
...
If your API needs a MoveTo command and you are not issuing it, the last position (or the default (0,0)) is used, which will connect the start of your curve with a straight line from the last drawn or default pen position.
Second problem
In continuous data you can threshold the asymptotes or discontinuities by checking consecutive y values. If your curve render looks like this:
Canvas->MoveTo(x[0],y[0]);
for (i=1;i<n;i++) Canvas->LineTo(x[i],y[i]);
Then you can change it to something like this:
y0 = y[0] + 2 * threshold; // guarantees a MoveTo on the very first point
for (i = 0; i < n; i++)
{
    if (fabs(y[i] - y0) >= threshold) Canvas->MoveTo(x[i], y[i]);
    else                              Canvas->LineTo(x[i], y[i]);
    y0 = y[i];
}
The problem is the selection of the threshold, because it depends on the x density of the sampled points and on the first derivative of your y data by x (the slope angles).
If you are stacking up more functions in the same array, the curve append will create your unwanted line; instead, handle each dataset as a separate draw, or put a MoveTo command between them.
[Edit1]
I see it like this (fake split):
double x0, y0, x1, y1, a;
int e; float x;
for (e = 1, x = -pi; x < pi; x += .0005f)
{
    // last point
    x0 = x1; y0 = y1;
    // your actual point
    x1 = x; y1 = -tan(x);
    // test discontinuity (e skips the test on the very first sample)
    if (e) { a = 0; e = 0; } else a = fabs(atan2(y1 - y0, x1 - x0));
    if (a > 0.499 * M_PI) curve.append(Vertex(Vector2f(x1, y1), Color::Black));
    else                  curve.append(Vertex(Vector2f(x1, y1), Color::Green));
}
The 0.499*M_PI is your threshold; the closer it is to 0.5*M_PI, the bigger the jumps it detects. I faked the curve split with the black (background) color, which will create gaps at axis intersections (unless transparency is used), but it avoids the need for a list of separate curves.
Those artifacts are due to the way sf::PrimitiveType::LinesStrip works (or, more specifically, line strips in general).
In your second example, visualizing y = -tan(x), you're jumping from positive infinity to negative infinity, which is the line you're seeing. You can't get rid of this unless you use a different primitive type or split your rendering into multiple draw calls.
Imagine a line strip as one long thread you're pinning with pushpins (representing your vertices). There's no (safe) way to go from positive infinity to negative infinity without those artifacts. Of course, you could move outside the visible area, but then again, that's really specific to this one function.
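If you'd rather split the rendering into multiple draw calls, a minimal sketch (assuming SFML 2.x; the 100.f jump threshold is an arbitrary illustrative value):

#include <SFML/Graphics.hpp>
#include <cmath>
#include <vector>

std::vector<sf::VertexArray> buildTanCurve()
{
    const float pi = 3.14159265f;
    std::vector<sf::VertexArray> strips;
    sf::VertexArray strip(sf::LinesStrip);
    float prevY = 0.f;
    bool first = true;
    for (float x = -pi; x < pi; x += .0005f)
    {
        float y = -std::tan(x);
        // a huge jump between consecutive samples marks the asymptote:
        // finish the current strip and start a new one
        if (!first && std::fabs(y - prevY) > 100.f)
        {
            if (strip.getVertexCount() > 1) strips.push_back(strip);
            strip.clear();
        }
        strip.append(sf::Vertex(sf::Vector2f(x, y), sf::Color::Green));
        prevY = y;
        first = false;
    }
    if (strip.getVertexCount() > 1) strips.push_back(strip);
    return strips; // draw each strip with its own window.draw(...) call
}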

Rendering Tilemap on the screen correctly

I'm having a strange problem rendering my tilemap-based level correctly.
On the y axis all the tiles are normal and aligned; on the x axis they seem to be divided by a space, and I can't figure out why...
I created a matrix with enum values (from 0 to 2) and cycled through it in a for loop to render the tile matching the current number, e.g. GROUND = 0, etc.
Here is a photo of what it looks like
http://it.tinypic.com/r/ali261/8
Here is the sprite for the tile
http://it.tinypic.com/r/21kggw5/8
I will add the code down here.
for (int y = 0; y < 15; y++)
{
    for (int x = 0; x < 20; x++)
    {
        if (map[y][x] == GROUND)
            render(tileTex, x*64 - camera.x, y*64 - camera.y, &gTileSprite[0], 0, NULL, SDL_FLIP_NONE);
        else if (map[y][x] == UGROUND)
            render(tileTex, x*64 - camera.x, y*64 - camera.y, &gTileSprite[1], 0, NULL, SDL_FLIP_NONE);
        else if (map[y][x] == SKY)
            render(tileTex, x*64 - camera.x, y*64 - camera.y, &gTileSprite[2], 0, NULL, SDL_FLIP_NONE);

        tBox[y][x].x = x*64;
        tBox[y][x].y = y*64;
        tBox[y][x].w = TILE_WIDTH;
        tBox[y][x].h = TILE_HEIGHT;
    }
}
Further to the comments above, one must be careful to avoid any blurring along the edges of tiles, since their repetition will make any defects more obvious than if they were viewed in isolation.
Blurring may be introduced in the process of drawing portions of the tilemap to the final/intermediate target, or as seems (and has been confirmed) in this case, the source material may have blurred edges.
Particularly when working with images of such 'low' pixel dimensions, one must be vigilant and ensure that any/all resizing operations are performed in an image editor without re-sampling.
While bilinear/cubic re-sampling may be desired when blitting the assembled image to the screen, it is never desirable for such re-sampling to happen to the source material.
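If the blur had instead been introduced at render time, in SDL2 one could request nearest-neighbour sampling (a sketch; the hint only affects textures created after it is set):

#include <SDL.h>

void preferHardTileEdges()
{
    // "nearest" (or "0") disables the default linear filtering when
    // the renderer scales textures, keeping pixel-art edges crisp
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "nearest");
}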

Program crashes, because of std::rotate_copy

I am trying to display two cams next to each other, rotated by 90°. Displaying both cams works fine, but if I want to rotate the cams, the program crashes.
Each camera device name is read as a QByteArray, and the camera is shown through a QCamera variable.
You can choose which camera is displayed in which viewfinder, so it has a code like this:
QActionGroup *videoDevicesGroup = new QActionGroup(this);
videoDevicesGroup->setExclusive(true);
foreach (const QByteArray &deviceName, QCamera::availableDevices()) {
    QString description = camera->deviceDescription(deviceName);
    QAction *videoDeviceAction = new QAction(description, videoDevicesGroup);
    videoDeviceAction->setCheckable(true);
    videoDeviceAction->setData(QVariant(deviceName));
    if (cameraDevice.isEmpty()) {
        cameraDevice = deviceName;
        videoDeviceAction->setChecked(true);
    }
    ui->menuDevices->addAction(videoDeviceAction);
}
connect(videoDevicesGroup, SIGNAL(triggered(QAction*)), SLOT(updateCameraDevice(QAction*)));
if (cameraDevice.isEmpty())
{
    camera = new QCamera;
}
else
{
    camera = new QCamera(cameraDevice);
}
connect(camera, SIGNAL(stateChanged(QCamera::State)), this, SLOT(updateCameraState(QCamera::State)));
connect(camera, SIGNAL(error(QCamera::Error)), this, SLOT(displayCameraError()));
camera->setViewfinder(ui->viewfinder);
updateCameraState(camera->state());
camera->start();
Now I'm trying to rotate this cam with the command:
std::rotate_copy(cameraDevice.constBegin(), cameraDevice.constEnd(), cameraDevice.constEnd(), reverse.begin());
camera = new QCamera(reverse);
But when I start the program, it crashes without any errors.
How can I fix this?
I think you have a misunderstanding on what std::rotate_copy does.
std::rotate_copy takes a range of data and shifts it as it copies into the location pointed to by the result iterator.
This won't rotate a camera. It just shifts and copies ranges: http://www.cplusplus.com/reference/algorithm/rotate_copy
EDIT:
Think about it this way: say I have std::string("wroybgivb").
Now say I do std::rotate_copy and pick the "y" as my middle; the std::string I copied into will contain "ybgivbwro".
Now think about that like I was working with a 3X3 image and each character represented a color:
wro ybg
ybg => ivb
ivb wro
Note that this is doing a linear array rotation (position shifting). I can never pick a middle such that rows will become columns and columns will become rows.
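To make that concrete, here is a minimal runnable demo of the string example above:

#include <algorithm>
#include <iostream>
#include <string>

int main()
{
    std::string src = "wroybgivb";
    std::string dst(src.size(), ' ');
    // middle points at 'y' (index 3): the output is [middle, end) then [begin, middle)
    std::rotate_copy(src.begin(), src.begin() + 3, src.end(), dst.begin());
    std::cout << dst << '\n'; // prints "ybgivbwro"
}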
PS:
OK, so say you knew the width of the (square) image, and assigned it to the variable width. You can do something like this to rotate 90° clockwise:
for (int i = 0; i < size; ++i){
    output[width - 1 - i / width + (i % width) * width] = input[i];
}
To understand this you need to understand indexing a linear array as though it's a 2D array.
Use this to get the x coordinate of element i: i % width
Use this to get the y coordinate (as a linear row offset): (i / width) * width
Now you need to take those indices and rotate them, still inside a linear array.
The rotated x coordinate becomes: width - 1 - i / width
The rotated y coordinate (row offset) becomes: (i % width) * width
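For the general (possibly non-square) case, a sketch that rotates a tightly packed pixel buffer 90° clockwise; the names and the uint32_t pixel type are illustrative:

#include <cstdint>
#include <vector>

std::vector<uint32_t> rotate90cw(const std::vector<uint32_t> &input,
                                 int width, int height)
{
    std::vector<uint32_t> output(input.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            // pixel (x, y) moves to (height - 1 - y, x); the rotated
            // image is height pixels wide, hence the 'height' stride
            output[x * height + (height - 1 - y)] = input[y * width + x];
    return output;
}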

Erase parts of image to make pixels transparent

I'm working in openFrameworks, a set of C++ libraries.
What I'm trying to do is simply 'erase' certain pixels of an ofImage (a class that loads/displays an image and has access to its pixel array) when a shape (an eraser) passes over the appropriate pixels of the image. Pretty simple stuff - I think - but I'm having a mental block!
ofImage has two methods - getPixels() and getPixelsRef() that seem to approach what I am trying to do, but the methodology I am using is not quite giving me the results I want.
Here is an example of an attempt to update the pixels of a foreground image from the pixels of a background image:
ofPixelsRef fore = foreground.getPixelsRef();
ofPixelsRef back = background.getPixelsRef();
for (int x = 0; x < foreground.getWidth()/2; x++) {
    for (int y = 0; y < foreground.getHeight(); y++) {
        ofColor c = back.getColor(x, y);
        fore.setColor(x, y, c);
    }
}
foreground.setFromPixels(fore);
and here is an attempt to statically colour the foreground with a predetermined colour (which I think is what I want to do, with a transparent white ?!?):
ofPixelsRef fore = foreground.getPixelsRef();
ofColor c(0, 127);
for (int x = 0; x < foreground.getWidth(); x++) {
    for (int y = 0; y < foreground.getHeight(); y++) {
        fore.setColor(x, y, c);
    }
}
foreground.setFromPixels(fore);
Neither are quite where I want to get to, but I think they're a stab in the right direction.
If anyone has any ideas on where to proceed, I'm all ears.
I'd consider moving to the ofFbo class, or even GLSL if there's a clean lead/example.
Feel free to post vanilla C++ as well, and I'll see what I can do about porting it to oF.
Thanks,
~ Jesse
FYI, I've found a solution detailed at this page: http://forum.openframeworks.cc/index.php/topic,12899.0.html
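For reference, a minimal CPU-side sketch of the erase idea (assuming the foreground image is allocated with an alpha channel, e.g. OF_IMAGE_COLOR_ALPHA, and a hypothetical eraser circle at (ex, ey) with radius r):

ofPixels &pix = foreground.getPixelsRef();
for (int y = 0; y < (int)foreground.getHeight(); y++) {
    for (int x = 0; x < (int)foreground.getWidth(); x++) {
        float dx = x - ex, dy = y - ey;
        if (dx * dx + dy * dy <= r * r) {
            ofColor c = pix.getColor(x, y);
            c.a = 0; // fully transparent: "erased"
            pix.setColor(x, y, c);
        }
    }
}
foreground.update(); // push the changed pixels back to the texture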

Circular collision rebound not working properly

I'm writing a little physics simulation in C++ that basically moves circles across the screen, and when two circles collide, they should ricochet in the same manner as billiard balls would. When the circles do collide with each other, most of the time they slow down to almost nothing / appear to stick to each other and become static. Sometimes only one ball will rebound in the collision, and the other will retain its trajectory. This is just a simple 2D simulation.
So here's what I have for the detection/ricochet logic:
bool Ball::BallCollision(Ball &b2)
{
    if (sqrt(pow(b2.x - x, 2) + pow(b2.y - y, 2)) <= b2.radius + radius) // Test for collision
    {
        normal[0] = (x - (x + b2.x) / 2) / radius; // Finds normal vector from point of collision
        normal[1] = (y - (y + b2.y) / 2) / radius;
        xvel = xvel - 2 * (xvel * normal[0]) * normal[0]; // Sets the velocity vector to the reflection vector
        yvel = yvel - 2 * (yvel * normal[1]) * normal[1];
        ////x = xprev; // These just move the circle back a 'frame' so the collision
        ////y = yprev; // detection doesn't detect collision more than once.
        // Not sure if working?
        return true;
    }
    return false;
}
I can't figure out what is wrong with my function. Thanks for any help in advance!
Edit:
Every variable is declared as a float
The functions:
void Ball::Move()
{
    xprev = x;
    yprev = y;
    x += xvel;
    y += yvel;
}

void Ball::DrawCircle()
{
    glColor3ub(100, 230, 150);
    glBegin(GL_POLYGON);
    for (int i = 0; i < 10; i++)
    {
        angle = i * (2 * 3.1415 / 10);
        newx = x + r * cos(angle);
        newy = y + r * sin(angle);
        glVertex2f(newx, newy);
    }
    glEnd();
}
The loop:
run_prev.clear(); // A vector, cleared every loop, that holds the Ball objects that collided
for (int i = 0; i < num_balls; i++)
{
    b[i].Move();
}
for (int i = 0; i < num_balls; i++)
{
    b[i].WallCollision(); // Just wall collision detection, which is working fine
}
// The loop that checks for collisions... Am I executing this properly?
for (int i = 0; i < num_balls; i++)
{
    for (int j = 0; j < num_balls; j++)
    {
        if (i == j) continue;
        if (b[i].BallCollision(b[j]) == true)
        {
            run_prev.push_back(b[i]);
        }
    }
}
for (int i = 0; i < num_balls; i++)
{
    b[i].DrawCircle();
}
// xprev and yprev are the x and y values of the previous frame for each circle
for (int i = 0; i < run_prev.size(); i++)
{
    run_prev[i].x = run_prev[i].xprev;
    run_prev[i].y = run_prev[i].yprev;
}
Make balls collide (reflect the movement vector) only if they're moving towards each other; do not process the collision if they're moving away from each other. Break this rule, and they'll be glued together.
When processing a collision, update both balls at once. Do not update one ball at a time.
Your movement vector adjustment is incorrect. Balls don't simply reflect off each other, because they can be moving at different speeds.
Correct movement adjustment (assuming balls of equal mass) should look something like this:
pos1, pos2 are the positions;
v1, v2 are the movement vectors (velocities);
n is the collision normal == normalize(pos1 - pos2);
collisionSpeed = dot((v2 - v1), n);
collisionSpeed *= elasticity; // elasticity in 0.0..1.0
v1 = v1 - scale(n, dot(v1, n)); // strip the normal components
v2 = v2 - scale(n, dot(v2, n));
v1 += scale(n, collisionSpeed * 0.5); // push the balls apart along the normal
v2 -= scale(n, collisionSpeed * 0.5);
To understand the formula, check Newton's laws (impulses in particular), or check Chris Hecker's papers on game physics.
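A self-contained sketch of the equal-mass, fully elastic case (illustrative names, not the original code): the balls simply exchange the components of their velocities along the collision normal, and the tangential components are untouched.

#include <cmath>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

void resolveElasticCollision(Vec2 pos1, Vec2 pos2, Vec2 &v1, Vec2 &v2)
{
    Vec2 n = { pos1.x - pos2.x, pos1.y - pos2.y };
    float len = std::sqrt(dot(n, n));
    if (len == 0.0f) return;            // coincident centers: give up
    n.x /= len; n.y /= len;             // collision normal, from ball 2 to ball 1

    float closing = dot({ v2.x - v1.x, v2.y - v1.y }, n);
    if (closing <= 0.0f) return;        // moving apart: do not process

    // exchange the normal velocity components (elasticity = 1.0)
    v1.x += closing * n.x;  v1.y += closing * n.y;
    v2.x -= closing * n.x;  v2.y -= closing * n.y;
}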
It's not clear how you're calling this function, but I think I see the problem.
Say you have Ball ballA and Ball ballB, which are colliding in the current frame, and then you run ballA.BallCollision(ballB).
This will update the member variables of ballA, and move it back a frame. But it doesn't update the position or trajectory of ballB.
Even if you call the converse as well (ballB.BallCollision(ballA)), it won't detect the collision because when you called ballA.BallCollision(ballB), it moved ballA back a frame.
I haven't looked at your code in detail, but it doesn't take into consideration that this type of collision can only work in center-of-momentum frames. Now, I assume your balls are of equal masses. What you do is take the average of the two momenta (or velocities, since the masses are equal) and subtract that average from the velocities. Perform your calculations, then add the average back. Here is the question I asked that may relate to this.
I know this question is quite old, but it's still relevant, especially to students. Something that wasn't mentioned in the answers made me want to contribute.
One thing that I ran into when solving a similar problem was overlap. That is, if the moving balls overlap by any amount at all, the collision detection will trigger continuously, giving the sticking behavior the OP referred to.
There was an attempt here to prevent this by moving the balls to the previous frame, but that can occasionally fail if the movement was enough that the balls enmeshed more than a single frame can account for, or if the movement velocity is just right so that the frame before doesn't trigger collision but the frame after is too far overlapped.
Since the original check was for center distance less than or equal to the sum of the radii, the detection triggers on both collision AND overlap.
One way to fix this is to separate the test into checking for collision (equals only) or overlap (less than only). For the collision, proceed as normal. But for the overlap condition, you can physically move one ball or the other (or both by half) the amount of overlap. This positions them at correct "collision" position, which allows for the correct behavior of the bounce function.
An overlap function that only moves one ball at a time might look something like this (not real code):
if (distanceBetweenBallCenters < sumOfRadii){
    // the difference is negative, so this pushes the first ball away from the second
    currentPosition = oldPosition - (distanceBetweenBallCenters - sumOfRadii) * (unitVectorFromSecondBallToFirstBall);
}
One could easily move both balls by half, but I found that moving one at a time gave satisfactory results for my uses, and allowed me to keep the parameter as a const.
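A symmetric variant that moves both balls by half the overlap might look like this (a sketch; Ball2 is an illustrative stand-in for the OP's Ball class):

#include <cmath>

struct Ball2 { float x, y, radius; };

void separate(Ball2 &b1, Ball2 &b2)
{
    float dx = b2.x - b1.x, dy = b2.y - b1.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    float overlap = (b1.radius + b2.radius) - dist;
    if (overlap > 0.0f && dist > 0.0f)
    {
        float nx = dx / dist, ny = dy / dist; // unit vector from b1 to b2
        b1.x -= nx * overlap * 0.5f;  b1.y -= ny * overlap * 0.5f;
        b2.x += nx * overlap * 0.5f;  b2.y += ny * overlap * 0.5f;
    }
}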
I hope this helps future students! (I am also a student, and new to this, so take my advice with the proverbial grain of salt)
Your way of calculating the normal is wrong: (x + b2.x)/2 doesn't have to be the point of collision if the radii of the balls aren't equal.