Calculate scrollbar height in grid with varied row height - c++

I have a grid with a lot of rows (e.g. 1,000,000). The height of each row may be unique, but most rows have the same height, so it is not feasible to measure the height of every row and compute the total grid height up front.
I need to implement smooth vertical scrolling over this grid, not just jumping from row to row, because a row can be taller than the visible area.
My solution is:
get the number of rows
divide each row into 10 parts
=> the scrollbar max value is (number of rows) * 10
From the scroll position I get:
first visible row = (scroll position) / 10
first visible row shift = (scroll position) % 10
This works fine if all rows have roughly the same height. But if one row is 500 px tall and the others are 25 px, scrolling looks awful.
Does anybody have a suggestion for a better way to solve this problem?
The grid is here:
http://img560.imageshack.us/img560/7775/scroll.png

Let the scroll be in pixel units:
Sum the total height of all the rows and set the scrollbar max value to that value.
Cache the first visible row index in a variable.
When the user scrolls up or down, scan sequentially from the current first visible row to find the new one. For sequential scrolling this is amortized constant work per update.
You won't do random access (e.g. scroll to row number N) frequently, so a linear search in that case is fine. If you need something faster (I doubt you will), you can precompute the partial sums of the row heights and binary-search them, as sketched below.
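For illustration, here is a minimal C++ sketch of that partial-sums variant (my own construction, not part of the original answer). It maps a pixel scroll position to the first visible row and the pixel offset into it. The prefix array is built inline to keep the example self-contained; a real grid would cache it and rebuild it only when row heights change.

#include <algorithm>
#include <cstdint>
#include <vector>

struct ScrollPos { std::size_t row; int offsetPx; };

// Assumes 0 <= scrollPx < total height of all rows.
ScrollPos rowAtScroll(const std::vector<int>& heights, std::int64_t scrollPx) {
    // prefix[i] = total height of rows 0..i-1
    std::vector<std::int64_t> prefix(heights.size() + 1, 0);
    for (std::size_t i = 0; i < heights.size(); ++i)
        prefix[i + 1] = prefix[i] + heights[i];
    // the first prefix entry strictly greater than scrollPx belongs to
    // the row that contains the scroll position
    auto it = std::upper_bound(prefix.begin(), prefix.end(), scrollPx);
    std::size_t row = static_cast<std::size_t>(it - prefix.begin()) - 1;
    return { row, static_cast<int>(scrollPx - prefix[row]) };
}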

Related

Number of ways to color exactly K cells in a 3xN matrix such that no two colored cells are adjacent (do not share edges)?

I tried to solve this problem using dynamic programming, but it seems I am missing some cases that I am unable to find.
Here is the recurrence I used to build values from sub-problems:
dp[i][j] = dp[i][j-1] + 3*(dp[i-1][j-1] - dp[i-2][j-2]) + dp[i-3][j-2]
(i = k = number of cells to be colored, j = n = number of columns; the number of rows is fixed at 3.)
The terms are defined as follows:
dp[i][j-1]: the case where I color no cell in the nth column.
dp[i-1][j-1] - dp[i-2][j-2]: the case where I color one cell in the last column. I subtract the case where the adjacent cell in the (n-1)th column is also colored, and since this can be done for each of the 3 cells in the nth column, I multiply by 3.
dp[i-3][j-2]: the case where I color two cells (the top and bottom ones) in the nth column, leaving only one choice for the (n-1)th column, the middle cell. Hence I subtract 3 from i, and since the last two columns are already accounted for, I subtract 2 from j.
I couldn't find any mistake in the above approach. If you see one, please help.
Below is the actual question, where an extra condition is also mentioned and must be handled: no P consecutive columns may be left empty.
My approach is to first count all the ways to color k cells in a 3xN matrix so that no two are adjacent, then count the arrangements in which some P consecutive columns contain no colored cell, and subtract the latter from the total. But with this approach I miss the correct answer by a small margin for small inputs and by a large margin for large ones, so I must be missing something.
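One way to hunt for the missing cases is to check the recurrence against an independent count for small inputs. Below is a minimal column-mask DP sketch (my own construction, not from the question) that counts the same quantity directly; comparing its output with the recurrence pinpoints the (k, n) pairs that disagree.

#include <cstdio>
#include <vector>

// Counts colorings of a 3 x n grid with exactly k colored cells and no two
// colored cells sharing an edge. A column is a 3-bit mask with no two
// vertically adjacent bits; consecutive columns must not share a set bit.
long long countColorings(int n, int k) {
    std::vector<int> masks;
    for (int m = 0; m < 8; ++m)
        if ((m & (m << 1)) == 0) masks.push_back(m); // 000, 001, 010, 100, 101
    // dp[mask][c] = ways to fill the columns so far, ending in 'mask',
    // with c cells colored in total
    std::vector<std::vector<long long>> dp(8, std::vector<long long>(k + 1, 0));
    dp[0][0] = 1; // virtual empty column before the first real one
    for (int col = 0; col < n; ++col) {
        std::vector<std::vector<long long>> nxt(8, std::vector<long long>(k + 1, 0));
        for (int prev : masks)
            for (int c = 0; c <= k; ++c) {
                if (dp[prev][c] == 0) continue;
                for (int cur : masks) {
                    if (prev & cur) continue; // horizontally adjacent cells
                    int c2 = c + __builtin_popcount(cur); // GCC/Clang builtin
                    if (c2 <= k) nxt[cur][c2] += dp[prev][c];
                }
            }
        dp.swap(nxt);
    }
    long long total = 0;
    for (int m : masks) total += dp[m][k];
    return total;
}

int main() {
    for (int n = 1; n <= 4; ++n)
        for (int k = 0; k <= 2 * n; ++k)
            std::printf("n=%d k=%d -> %lld\n", n, k, countColorings(n, k));
}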

Tkinter - Columns of equal weight are NOT equal width

I have a Tkinter Toplevel window with three columns. All three columns are configured to have equal weight. Inside columns 0 and 2 are sub-frames containing Listbox widgets. Inside column 1 is a set of buttons. For some reason, despite the equal weights, the Listboxes 'force' their columns to occupy more space.
I've written:
window.columnconfigure(0, weight=1)
window.columnconfigure(1, weight=1)
window.columnconfigure(2, weight=1)
But I get columns of unequal width: the Listbox columns come out wider than the button column.
I've also tried giving column 1 a weight of 3, and then 5, but it still stays small. Having done this, it seems that columns 0 and 2 have some minimum size; that minimum is taken out of the real width first, and only the leftover width is divided up by weight.
Is this a bug? Is there something I need to do to my Listboxes? Might I be forgetting something?
It is not a bug. weight determines how extra space is allocated; it makes no guarantees about the actual size of a row or column.
If you want the columns to have a uniform width, use the uniform option and make them all part of the same uniform group:
window.columnconfigure(0, weight=1, uniform='third')
window.columnconfigure(1, weight=1, uniform='third')
window.columnconfigure(2, weight=1, uniform='third')
Note: there is nothing special about 'third' -- it can be any string as long as it's the same string for all three columns.

Pack smaller Rectangle in Bigger One with highest Repeat Count?

I have a canvas rectangle (constant width and height) and a child rectangle (also constant width and height).
I want to fit the smaller rectangle into the canvas with the highest repeat count (or the least wasted space, i.e. the maximum occupancy ratio).
When I tried well-known algorithms like GuillotineBinPack or MaxRectsBinPack to fit, say, a 25x20 rectangle into 70x100, all of them gave me at most 13 rectangles instead of the optimal 14 (5 in the first row + 5 in the second row + 4 in the third row, mixing orientations).
Note: I tried all the heuristic permutations available with these algorithms and still failed to reach the optimum.
Any small hint would be highly appreciated.
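Since all the pieces are identical and only two orientations are possible, even a single guillotine cut with a uniform grid on each side can beat the general-purpose heuristics on an instance like this. Here is a minimal sketch (my own construction, not a general optimal packer); it finds 14 for a 25x20 piece in a 70x100 canvas:

#include <algorithm>
#include <cstdio>

// Number of w x h pieces that fit in a W x H area as a plain grid,
// trying both orientations.
static int grid(int W, int H, int w, int h) {
    int a = (W / w) * (H / h); // unrotated
    int b = (W / h) * (H / w); // rotated 90 degrees
    return std::max(a, b);
}

// One guillotine cut: split the canvas vertically or horizontally and pack
// each part as a uniform grid in its best orientation.
static int packWithOneCut(int W, int H, int w, int h) {
    int best = grid(W, H, w, h);
    for (int x = 1; x < W; ++x) // vertical cut at x
        best = std::max(best, grid(x, H, w, h) + grid(W - x, H, w, h));
    for (int y = 1; y < H; ++y) // horizontal cut at y
        best = std::max(best, grid(W, y, w, h) + grid(W, H - y, w, h));
    return best;
}

int main() {
    std::printf("%d\n", packWithOneCut(70, 100, 25, 20)); // prints 14
}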

How to calculate the width of the middle 98% mass of the gray-level histogram of an image

I need to calculate the contrast of a color image. The steps given to me are:
Compute the histogram for each RGB channel separately and combine them as Histogram = histOfRedC + histOfBlueC + histOfgreenC.
Normalize it to unit length, since each image is of a different size.
The contrast quality is equal to the width of the middle 98% mass of the histogram.
I have done the first two steps but can't understand what to compute in the third. Can somebody please explain what it means?
Let the total mass of the histogram be M.
Accumulate the mass in the bins, starting from index zero, until you pass 0.01 M. This gives you an index Q01.
Do the same from the maximum index downward until the mass accumulated from the top passes 0.01 M (equivalently, until the cumulative mass from below drops under 0.99 M). This gives you an index Q99.
These indexes are the 1st and 99th percentiles of the histogram. The contrast is estimated as Q99 - Q01.
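A minimal C++ sketch of this computation (my own construction; hist is assumed to be the combined histogram from the question, and since the cuts are taken relative to the total mass it works whether or not the histogram is normalized):

#include <vector>

// Width of the middle 98% mass: find the bin indices where the cumulative
// mass crosses 1% from the bottom and 1% from the top, then take their
// difference.
int contrastWidth(const std::vector<double>& hist) {
    double M = 0.0;
    for (double v : hist) M += v; // total mass (1.0 if normalized)
    int q01 = 0, q99 = static_cast<int>(hist.size()) - 1;
    double acc = 0.0;
    for (int i = 0; i < static_cast<int>(hist.size()); ++i) { // lower cut
        acc += hist[i];
        if (acc >= 0.01 * M) { q01 = i; break; }
    }
    acc = 0.0;
    for (int i = static_cast<int>(hist.size()) - 1; i >= 0; --i) { // upper cut
        acc += hist[i];
        if (acc >= 0.01 * M) { q99 = i; break; }
    }
    return q99 - q01; // estimated contrast
}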

HOG: What is done in the contrast-normalization step?

According to the HOG process, as described in the paper Histogram of Oriented Gradients for Human Detection (see link below), the contrast normalization step is done after the binning and the weighted vote.
I don't understand something: if I have already computed the cells' weighted gradients, how can normalizing the image's contrast help me now?
As far as I understand, contrast normalization is done on the original image, whereas for computing the gradients I have already computed the X and Y derivatives of the ORIGINAL image. So if I normalize the contrast and want it to take effect, I would have to compute everything again.
Is there something I don't understand correctly?
Should I normalize the cells' values?
Is the normalization in HOG not about contrast anyway, but is about the histogram values (counts of cells in each bin)?
Link to the paper:
http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf
The contrast normalization is achieved by normalizing each block's local histogram.
The whole HOG extraction process is well explained here: http://www.geocities.ws/talh_davidc/#cst_extract
When you normalize the block histogram, you actually normalize the contrast in that block, provided your histogram really contains the sum of magnitudes for each direction.
The term "histogram" is confusing here because you do not count how many pixels have direction k; instead, you sum the magnitudes of those pixels. Thus you can normalize the contrast after computing a block's vector, or even after you have computed the whole vector, assuming you know at which indices in the vector each block starts and ends.
The steps of the algorithm as I understand them (this worked for me with a 95% success rate):
Define the following parameters (In this example, the parameters are like HOG for Human Detection paper):
A cell size in pixels (e.g. 6x6)
A block size in cells (e.g. 3x3 ==> 18x18 in pixels)
Block overlapping rate (e.g. 50% ==> both the block width and the block height in pixels must be even; this is satisfied in our example because the cell width and height are even (6 pixels), which makes the block width and height even as well)
Detection window size. The size must be divisible by half the block size without remainder (so the blocks can be placed exactly within it with 50% overlap). For example, the block width is 18 pixels, so the window width must be a multiple of 9 (e.g. 9, 18, 27, 36, ...). Same for the window height. In our example, the window width is 63 pixels and the window height is 126 pixels.
Calculate gradient:
Compute the X difference using convolution with the vector [-1 0 1]
Compute the Y difference using convolution with the transpose of the above vector
Compute the gradient magnitude in each pixel using sqrt(diffX^2 + diffY^2)
Compute the gradient direction in each pixel using atan(diffY / diffX). Note that atan returns values between -90 and 90, while you will probably want values between 0 and 180, so just flip all the negative values by adding 180 degrees. Note that HOG for Human Detection uses unsigned directions (between 0 and 180). If you want signed directions, a little more work is needed: if diffX and diffY are both positive, the atan value is between 0 and 90, so leave it as is. If both are negative, you get the same range of values, so add 180 to flip the direction to the other side. If diffX is positive and diffY is negative, you get values between -90 and 0, so leave them as they are (you can add 360 if you want them positive). If diffY is positive and diffX is negative, you again get the same range, so add 180 to flip the direction to the other side.
"Bin" the directions. For example, 9 unsigned bins: 0-20, 20-40, ..., 160-180. You can easily achieve that by dividing each value by 20 and flooring the result. Your new binned directions will be between 0 and 8.
Do the following for each block separately, using copies of the original matrix (because blocks overlap and we do not want to destroy their data):
Split the block into cells.
For each cell, create a vector with 9 entries (one per bin), and in each entry store the sum of the magnitudes of all the pixels in the cell whose binned direction matches. There are 6x6 = 36 pixels in a cell; so, for example, if 2 pixels have direction 0, with magnitudes 0.231 and 0.13, you write 0.361 (= 0.231 + 0.13) into index 0 of the vector.
Concatenate the vectors of all the cells in the block into one large vector. Its size should of course be NUMBER_OF_BINS * NUMBER_OF_CELLS_IN_BLOCK; in our example, 9 * (3 * 3) = 81.
Now normalize this vector: compute k = sqrt(v[0]^2 + v[1]^2 + ... + v[n]^2 + eps^2) (I used eps = 1), then divide each value in the vector by k.
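The normalization step in code, as a minimal sketch (my own construction; block is the 81-element concatenation of the cell vectors):

#include <cmath>
#include <vector>

// L2 normalization with the epsilon term from the formula above:
// k = sqrt(v[0]^2 + ... + v[n-1]^2 + eps^2), then divide every entry by k.
void normalizeBlock(std::vector<float>& block, float eps = 1.0f) {
    float sumSq = eps * eps;
    for (float v : block) sumSq += v * v;
    float k = std::sqrt(sumSq);
    for (float& v : block) v /= k;
}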
Create the final vector:
Concatenate the vectors of all the blocks into one large vector. In my example, the size of this vector was 6318 (with 50% overlap there are 6 block positions across the 63-pixel width and 13 down the 126-pixel height, i.e. 78 blocks of 81 values each).
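To check that number, here is a hypothetical helper (my own sketch) that computes the descriptor length from the parameters above:

#include <cstdio>

// Descriptor length: number of block positions times values per block.
int descriptorSize(int winW, int winH, int blockPx, int stridePx,
                   int cellsPerBlock, int bins) {
    int bx = (winW - blockPx) / stridePx + 1; // 6 positions across 63
    int by = (winH - blockPx) / stridePx + 1; // 13 positions down 126
    return bx * by * cellsPerBlock * bins;    // 78 * 81 = 6318
}

int main() {
    std::printf("%d\n", descriptorSize(63, 126, 18, 9, 9, 9)); // prints 6318
}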