I am using QwtPlotBarChart to draw a histogram. I have set the chart up so that there is zero spacing between bars, and my xBottom axis has the real range of the data.
Samples are set on the chart using a QVector of QPointF, with the x values corresponding to the midpoints of the bins.
A QwtPlotPicker hovering over the bars shows that they start and end at the actual start and ends of the bins.
However, I am having trouble getting the ticks where the labels are drawn to show up in the correct place along the x axis. I am using a custom QwtScaleDraw, similar to the distrowatch example. I have added a couple of extra parameters, namely the minimum value of the data range and the bin size. My label code looks like this:
virtual QwtText label( double value ) const
{
    QwtText lbl;
    //const int index = qRound( value );
    const int index = (int)( ( value - _min_val ) / _bin_size );
    if ( index >= 0 && index < d_labels.size() )
    {
        lbl = d_labels[ index ];
    }
    return lbl;
}
This seems to be OK; my labels show the min and max values of each bin and are nearly in the right place.
I have tried to set the tick positions as follows:
QwtScaleDiv div = _plot->axisScaleDiv(QwtPlot::xBottom);
div.setTicks(QwtScaleDiv::MajorTick, x_data.toList());
_plot->setAxisScaleDiv(QwtPlot::xBottom, div);
Here, x_data is the vector of bin midpoints used to plot the data. The axis range is set using:
double half_bin = bin_size / 2.0;
_plot->setAxisScale(QwtPlot::xBottom, x_data.first() - half_bin, x_data.last() + half_bin, bin_size);
Any ideas what I'm missing? Depending on the number of bins, the ticks (and therefore the labels) will occasionally line up with the midpoints of the bins (or at least most of them), but as I change the bin count, the ticks become more or less offset from the centers of the bins.
I'm making a game with C++ and SFML and was wondering if there's a way to iterate through specific elements in a vector. I have a vector of tiles which makes up the game world, but depending on the game map's size (1000 x 1000 tiles), iterating through all of them seems very inefficient. I was wondering if there was a way to say "for each tile in vector of tiles that (fits a condition)". Right now, my code for drawing these tiles looks like this:
void Tile::draw()
{
    for (const auto& TILE : tiles)
    {
        if (TILE.sprite.getGlobalBounds().intersects(Game::drawCuller.getGlobalBounds()))
        {
            Game::window.draw(TILE.sprite);
        }
    }
}
As you can see, I'm only drawing the tiles in the view (or drawculler). If the vector is too large, it will take a really long time to iterate through it. This greatly impacts my fps. When I have a 100 x 100 tile map, I get around 800 fps, but when I use a 1000 x 1000 tile map, I get roughly 25 fps due to the lengthy iteration. I know that I could separate my tiles into chunks and only iterate through the ones in the current chunk, but I wanted something a little easier to implement. Any help would be appreciated :)
Given the following assumptions:
Your tiles are likely arranged on a regular grid with a (column, row) index.
Your tiles are likely inserted into your vector in row-major order, and the vector is likely fully populated. So the index of a tile in your vector is likely (row * numColumns + column).
Your view is likely axis-aligned to the grid (where you can't rotate your view - as is the case with many 2d tile-based games)
If those assumptions hold true, then you can easily iterate through the appropriate range of tiles with a nested loop.
for (int row = minRow; row <= maxRow; ++row) {
    for (int column = minColumn; column <= maxColumn; ++column) {
        int index = row * numColumns + column;
        // Here you can...
        doSomethingWith(tiles[index]);
    }
}
This just requires that you can compute the minRow, maxRow, minColumn, and maxColumn from your Game::drawCuller.getGlobalBounds(). You haven't disclosed the details, but it's likely something like a rectangle in world coordinates (which might be in some units like meters). It's likely either a left, top, width, height style rectangle or a min, max style bounds rectangle. Assuming the latter:
minViewColumn = floor((bounds.minInMeters.x - originOfGridInMeters.x) / gridTileSizeInMeters);
maxViewColumn = ceil((bounds.maxInMeters.x - originOfGridInMeters.x) / gridTileSizeInMeters);
// similarly for rows
minViewRow = floor((bounds.minInMeters.y - originOfGridInMeters.y) / gridTileSizeInMeters);
maxViewRow = ceil((bounds.maxInMeters.y - originOfGridInMeters.y) / gridTileSizeInMeters);
The originOfGridInMeters is the global coordinate of the top-left corner of the tile at (row=0, column=0), which may very well be (0, 0), conveniently, if you set up your world like that. And gridTileSizeInMeters is, well, just that; presumably your tiles have a square aspect ratio in world space.
If the view is permitted to go outside the extents of the tile array, minViewColumn (and the other iteration extents) may now be less than 0 or greater than or equal to the number of columns in your tile array. So it is then necessary to compute minColumn from minViewColumn by clipping it to the range of tiles stored in your grid. (The same goes for the other iteration extents.)
// Clip to the range of valid rows and columns.
minColumn = min(max(minViewColumn, 0), numColumns - 1);
maxColumn = min(max(maxViewColumn, 0), numColumns - 1);
minRow = min(max(minViewRow, 0), numRows - 1);
maxRow = min(max(maxViewRow, 0), numRows - 1);
Now do that loop I showed you above, and you're good to go!
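Putting the pieces together, a minimal sketch under those assumptions might look like this (tileSize, numColumns and numRows are hypothetical members standing in for whatever your Game/Tile classes actually store, the grid origin is assumed to be at world (0, 0), and the culling rectangle is assumed to be an SFML 2-style left/top/width/height FloatRect):

#include <SFML/Graphics.hpp>
#include <algorithm>
#include <cmath>

void Tile::draw()
{
    const sf::FloatRect view = Game::drawCuller.getGlobalBounds();

    // Convert the view rectangle into a range of rows and columns...
    int minColumn = static_cast<int>(std::floor(view.left / tileSize));
    int maxColumn = static_cast<int>(std::ceil((view.left + view.width) / tileSize));
    int minRow    = static_cast<int>(std::floor(view.top / tileSize));
    int maxRow    = static_cast<int>(std::ceil((view.top + view.height) / tileSize));

    // ...and clip that range to the tiles that actually exist.
    minColumn = std::max(minColumn, 0);
    maxColumn = std::min(maxColumn, numColumns - 1);
    minRow    = std::max(minRow, 0);
    maxRow    = std::min(maxRow, numRows - 1);

    // Only the tiles that can possibly intersect the view are touched.
    for (int row = minRow; row <= maxRow; ++row)
    {
        for (int column = minColumn; column <= maxColumn; ++column)
        {
            const std::size_t index = static_cast<std::size_t>(row) * numColumns + column;
            Game::window.draw(tiles[index].sprite);
        }
    }
}

With this, the per-frame cost depends only on how many tiles are on screen, not on the size of the whole map.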
I was wondering if there was a way to say "for each tile in vector of tiles that (fits a condition)"
In general, no. The only way to know if an element fits a condition is to look at it and see if it fits the condition. You can't do that without iterating over all the elements and checking the condition for each.
The way to avoid this is to build some sort of index structure. For instance, if you have tiles with attributes that change rarely, you could pre-build vectors of pointers to all of your tiles with some attribute. That way you can check the condition only once (or rarely) instead of on each frame. For instance you could build separate vectors of all of your blue tiles, all of your red tiles, and all of your green tiles. Then if you want to iterate over all of the tiles of a certain color you could do "for each blue tile" directly instead of "for each tile, if it's blue". This generally trades storage/memory usage for execution speed.
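For illustration, a minimal sketch of that kind of index, using a hypothetical Tile type with a color attribute (the names here are made up, not from the question):

#include <vector>

struct Tile { enum Color { Red, Green, Blue }; Color color; /* sprite, etc. */ };

std::vector<Tile>  tiles;      // owns all tiles
std::vector<Tile*> blueTiles;  // the index: rebuilt only when colors change

void rebuildColorIndex()
{
    blueTiles.clear();
    for (Tile& tile : tiles)
        if (tile.color == Tile::Blue)
            blueTiles.push_back(&tile);
    // Note: these pointers are invalidated if `tiles` reallocates,
    // so rebuild the index after adding or removing tiles.
}

// Per frame: "for each blue tile" directly, with no per-tile condition check:
// for (Tile* tile : blueTiles) { /* ... */ }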
The same concept applies to your specific situation, as you mentioned. You can pre-build caches of chunks, and quickly filter out whole chunks that aren't near your camera. This will prevent you from having to check every tile to see if it's in view.
Using code from a question about picking a nice rule/marker interval, I created code that renders rules on the graph.
It has nice intervals of 0.1, but I don't want to display all the numbers; instead, I'd like to increase the rule density but only label every few rules. Like this:
I created an algorithm that does so by multiplying the rule interval by a number, then highlighting rules that are divisible by the result. I use fmod because the values can, of course, be floating point:
// See https://stackoverflow.com/q/361681/607407 for algorithms
double rule_spacing = tickSpacing(pixels_per_rule);
// These are the highlighted rules - currently it should be every 2nd rule
double important_steps = rule_spacing * 2.0;
// Get the start draw offset, which should be below the bottom graph corner
double start = graph_math::roundTo(begin.y, rule_spacing);
LOGMTRTTIINFO("Legend from "<<start<<" to "<<values.maxYValue<<" by "<<rule_spacing<<", numbers: "<<important_steps<<'\n');
// Loop until we are at the top
while(start <= values.maxYValue) {
    int y = pixelForYValue(start);
    // HERE: calculating whether this is the Nth rule!
    float remainder = fmod(start, important_steps);
    LOGMTRTTIINFO("  legend at px"<<y<<" with marker "<<start<<" Marker remainder:"<<remainder<<'\n');
    if(remainder == 0) {
        // Draw highlighted rule
    }
    else {
        // Draw normal rule
    }
    start += rule_spacing;  // advance to the next rule
}
The problem is that fmod is rather unreliable. Check this log output, where values that are divisible by 0.1 return 0.1 from fmod:
Legend from 95.9 to 96.3097 by 0.05, numbers (important_steps): 0.1
legend at px240 with marker 95.9 Marker remainder:3.60822e-16
legend at px211 with marker 95.95 Marker remainder:0.05
legend at px181 with marker 96 Marker remainder:0.1
legend at px152 with marker 96.05 Marker remainder:0.05
legend at px123 with marker 96.1 Marker remainder:0.1
legend at px93 with marker 96.15 Marker remainder:0.05
legend at px64 with marker 96.2 Marker remainder:0.1
legend at px35 with marker 96.25 Marker remainder:0.05
legend at px5 with marker 96.3 Marker remainder:0.1
I guess I could work around this by also treating remainder == important_steps as a match, but isn't the function flawed if it actually returns the divisor, which should never happen for % (modulo)?
How do I overcome this with sufficient certainty? Testing snippet available.
Btw, this is how nicely it works once important_steps is greater than or equal to 1:
Perhaps the best way to do this is to round to integers, and then check the difference, like this:
double quotient = start / important_steps;
int quot_int = round(quotient);
if (fabs(quotient - quot_int) < 1e-10)
{
    // Draw highlighted rule
}
else
{
    // Draw normal rule
}
EDIT: the round function used above:
int round(double x) { return (int)floor(x + 0.5); }
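For example, dropped into the loop from the question it might look like this (same variable names as in the question; the 1e-10 tolerance is just an illustrative choice, and a tolerance relative to important_steps would be even safer):

while (start <= values.maxYValue) {
    int y = pixelForYValue(start);
    double quotient = start / important_steps;
    int quot_int = round(quotient);           // the round() helper above
    if (fabs(quotient - quot_int) < 1e-10) {
        // Draw highlighted rule
    } else {
        // Draw normal rule
    }
    start += rule_spacing;
}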
Some sample code about image processing using OpenCV gives something like this:
for(i = 0; i < height; i++)
{
    for(j = 0; j < width; j++)
    {
        if(pointPolygonTest(Point(i,j), myPolygon))
        {
            // do some processing
        }
    }
}
In the iteration, why do we need to loop from 0 up to height and width? And why is the Point stored as (height, width), that is, (y, x)?
The ranges [0..height] and [0..width] are the maximum boundaries of your working area.
This code is testing which pixels of whole image are inside the polygon myPolygon.
The word "whole" means you should check all pixels of your image so you should iterate from 0 to height for Y, and iterate from 0 to width for X.
Actually here, the row/column convention is used to iterate over the whole image.
height = Number of Rows
width = Number of Columns
The image is being accessed row-wise. The outer loop iterates over the rows of the image and the inner loop iterates over the columns. So basically i is the current row and j is the current column of the image.
The inner loop processes a complete row of the image.
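As a concrete illustration of that convention, here is a sketch assuming a single-channel 8-bit cv::Mat named image (note that cv::Point takes (x, y), i.e. (column, row), while Mat::at takes (row, column)):

#include <opencv2/opencv.hpp>

void scanImage(const cv::Mat& image)  // hypothetical helper
{
    for (int i = 0; i < image.rows; i++)         // i = current row    (y)
    {
        for (int j = 0; j < image.cols; j++)     // j = current column (x)
        {
            uchar pixel = image.at<uchar>(i, j); // at() takes (row, column)
            cv::Point p(j, i);                   // Point takes (x, y)
            // do some processing with pixel and p
        }
    }
}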
I'm trying to figure out how to traverse a 2.5D grid in an efficient manner. The grid itself is 2D, but each cell in the grid has a float min/max height. The line to traverse is defined by two 3D floating point coordinates. I want to stop traversing the line if the range of z values between entering/exiting a grid cell doesn't overlap with the min/max height for that cell.
I'm currently using the 2D DDA algorithm to traverse through the grid cells in order (see picture), but I'm not sure how to calculate the z value when each grid cell is reached. If I could do that, I could test the z value when entering/leaving the cell against the min/max height for the cell.
Is there a way to modify this algorithm that allows z to be calculated when each grid cell is entered? Or is there a better traversal algorithm that would allow me to do that?
Here's the current code I'm using:
void Grid::TraceGrid(const Point3<float>& start, const Point3<float>& end, GridCallback callback)
{
    // calculate and normalize the 2D direction vector
    Point2<float> direction = end - start;
    float length = direction.getLength();
    direction /= length;
    // calculate delta using the grid resolution
    Point2<float> delta(m_gridresolution / fabs(direction.x), m_gridresolution / fabs(direction.y));
    // calculate the starting/ending points in the grid
    Point2<int> startGrid((int)(start.x / m_gridresolution), (int)(start.y / m_gridresolution));
    Point2<int> endGrid((int)(end.x / m_gridresolution), (int)(end.y / m_gridresolution));
    Point2<int> currentGrid = startGrid;
    // calculate the direction step in the grid based on the direction vector
    Point2<int> step(direction.x >= 0 ? 1 : -1, direction.y >= 0 ? 1 : -1);
    // calculate the distance to the next grid cell from the start
    Point2<float> currentDistance(((step.x > 0 ? start.x : start.x + 1) * m_gridresolution - start.x) / direction.x,
                                  ((step.y > 0 ? start.y : start.y + 1) * m_gridresolution - start.y) / direction.y);
    while(true)
    {
        // pass currentGrid to the callback
        float z = 0.0f; // need to calculate z value somehow
        bool bstop = callback(currentGrid, z);
        // check if the callback wants to stop or the end grid cell was reached
        if(bstop || currentGrid == endGrid) break;
        // traverse to the next grid cell
        if(currentDistance.x < currentDistance.y) {
            currentDistance.x += delta.x;
            currentGrid.x += step.x;
        } else {
            currentDistance.y += delta.y;
            currentGrid.y += step.y;
        }
    }
}
It seems like a 3D extension of the Bresenham Line Algorithm would work. You would iterate over X and independently track the error for the Y and Z components of your line segment to determine the Y and Z values for each corresponding X value. You just stop when the accumulated error in Z reaches some critical level which would indicate it is outside of your min/max.
For each cell, you know which cell you came from, and therefore which side you entered through. Calculating z at the intersection of the green line and that grid line is then straightforward.
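For example, with the variables from the question's TraceGrid, a sketch of that calculation could look like this (assuming currentDistance.x/.y hold the 2D distance along the normalized direction to the next vertical/horizontal grid line):

// 2D distance travelled when the next cell boundary is crossed,
// taken before currentDistance is advanced (std::min from <algorithm>).
float t2d = std::min(currentDistance.x, currentDistance.y);
// Fraction of the whole 2D segment covered so far, applied to the z range.
float z = start.z + (t2d / length) * (end.z - start.z);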
I figured out a good way to do it. Add to the start of the function:
float fzoffset=end.z-start.z;
Point2<float> deltaZ(fzoffset/fabs(end.x-start.x), fzoffset/fabs(end.y-start.y));
Point2<float> currentOffset((step.x>0?start.x:start.x+1)*m_gridresolution-start.x, (step.y>0?start.y:start.y+1)*m_gridresolution-start.y);
Inside the loop where currentDistance.x/.y are incremented, add:
currentOffset.x+=m_gridresolution; //When stepping in the x axis
currentOffset.y+=m_gridresolution; //When stepping in the y axis
Then to calculate z at each step:
z=currentOffset.x*deltaZ.x+start.z; //When stepping in the x axis
z=currentOffset.y*deltaZ.y+start.z; //When stepping in the y axis
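Spliced into the loop from the question, the whole thing might look roughly like this (z would also need an initial value before the loop, e.g. start.z, so the first callback gets something sensible; that initialization is an assumption, not part of the answer above):

while(true)
{
    bool bstop = callback(currentGrid, z);
    if(bstop || currentGrid == endGrid) break;

    if(currentDistance.x < currentDistance.y) {
        currentDistance.x += delta.x;
        currentGrid.x += step.x;
        currentOffset.x += m_gridresolution;       // stepping in the x axis
        z = currentOffset.x * deltaZ.x + start.z;
    } else {
        currentDistance.y += delta.y;
        currentGrid.y += step.y;
        currentOffset.y += m_gridresolution;       // stepping in the y axis
        z = currentOffset.y * deltaZ.y + start.z;
    }
}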
I have an array that represents a grid.
For the sake of this example we will start the array at 1 rather than 0, because I only realized this after making the picture and can't be bothered to edit it.
In this example blue would have an index of 5, green an index of 23 and red 38.
Each color represents an object, and the array index represents where the object is. I have implemented very simple gravity, whereby if the cell underneath, at index x + (WIDTH * (y + 1)), is empty, then the object moves into that cell and the cell it was in becomes empty.
This all works well in its current form, but what I want to do is make it so that red is the gravity point, so that in this example, blue will move to array index 16 and then 27.
This is not too bad, but how would the object be able to work out dynamically where to move, as in the example of the green square? How can I get it to move to the correct index?
Also, what would be the best way to iterate through the array to 'find' the location of red? I should also note that red won't always be at 38.
Any questions please ask, also thank you for your help.
This sounds very similar to line rasterization. Just imagine the grid to be a grid of pixels. Now when you draw a line from the green point to the red point, the pixels/cells that the line will pass are the cells that the green point should travel along, which should indeed be the shortest path from the green point to the red point along the discrete grid cells. You then just stop once you encounter a non-empty grid cell.
Look for Bresenham's algorithm as THE school book algorithm for line rasterization.
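A minimal sketch of the classic integer Bresenham line, returning every cell on the way from one grid cell to another (cell coordinates here are (x, y), not array indices; convert with your usual y * cols + x):

#include <cstdlib>
#include <vector>

struct Cell { int x, y; };

// All cells on the discrete line from a to b, in order.
std::vector<Cell> lineCells(Cell a, Cell b)
{
    std::vector<Cell> cells;
    int dx = std::abs(b.x - a.x), sx = a.x < b.x ? 1 : -1;
    int dy = -std::abs(b.y - a.y), sy = a.y < b.y ? 1 : -1;
    int err = dx + dy;
    while (true) {
        cells.push_back(a);
        if (a.x == b.x && a.y == b.y) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; a.x += sx; }
        if (e2 <= dx) { err += dx; a.y += sy; }
    }
    return cells;
}

You would walk this list from the green cell toward the red cell and stop at the first non-empty cell.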
And for searching for the red point, just iterate over the array linearly until you find it, then keep track of its grid position, like William already suggested in his answer.
x = x position
y = y position
cols = number of columns across in your grid
(y * cols) + x = absolute index in the array for any x, y
you could generalize this in a function:
int get_index(int x, int y, int gridcols)
{
return (gridcols * y) + x;
}
It should be noted that this works for ZERO BASED INDICES.
This is assuming I am understanding what you're talking about at all...
As for the second question, for any colored element you have, you should keep a value in memory (possibly stored in a structure) that keeps track of its position so you don't have to search for it at all.
struct _THING {
int xpos;
int ypos;
};
Using the get_index() function, you could find the index of the grid cell below it by calling like this:
index_below = get_index(thing.xpos, thing.ypos + 1, gridcols);
thing.ypos++; // increment the thing's y now since it has moved down
simple...
IF YOU WANT TO DO IT IN REVERSE, as in finding the x,y position by the array index, you can use the modulus operator and division.
ypos = array_index / total_cols; // division without remainder
xpos = array_index % total_cols; // gives the remainder
You could generalize this in a function like this:
// x and y parameters are references, and return values using these references
void get_positions_from_index(int array_index, int total_columns, int& x, int& y)
{
y = array_index / total_columns;
x = array_index % total_columns;
}
Whenever you're referring to an array index, it must be zero-based. However, the number of columns is a count (starting at 1), and that count is what goes into the calculations. x and y positions are also zero-based.
Probably the easiest approach is to work entirely in a system of (x, y) coordinates to calculate gravity, and switch to array indices only when you finally need to look up and store objects.
In your example, consider (2, 4) (red) to be the center of gravity; (5, 1) (blue) needs to move in the direction (2-5, 4-1) == (-3, 3) by the distance n. You get to decide how simple you want n to be -- it could be that you move your objects to an adjoining element, including diagonals, so move (blue) to (5-1, 1+1) == (4, 2). Or perhaps you could move objects by some scalar multiple of the unit vector that describes the direction you need to move. (Say, heavier objects move further because the attraction of gravity is stronger. Or, lighter objects move further because they have less inertia to overcome. Or objects move further the closer they are to the gravity well, because gravity follows an inverse square law.)
Once you've sorted out the virtual coordinates of your universe, then convert your numbers (4, 2) via some simple linear formulas: 4*columns + 2 -- or just use multidimensional arrays and truncate your floating-point results to get your array indexes.
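As a rough sketch of that per-step update, using the simple "move one cell toward the gravity point, diagonals allowed" rule (COLS and the function name are hypothetical):

const int COLS = 11;  // hypothetical grid width

// One gravity step: move the object at zero-based (x, y) one cell toward the
// gravity point (gx, gy), diagonals allowed, and return its new array index.
int stepTowardsGravity(int x, int y, int gx, int gy)
{
    int dx = gx - x;
    int dy = gy - y;
    x += (dx > 0) - (dx < 0);   // sign of dx: -1, 0 or +1
    y += (dy > 0) - (dy < 0);   // sign of dy
    return y * COLS + x;
}

The caller would still need to check that the destination cell is empty before committing the move.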