Julia set colours in C++

I'm currently working on an assignment where I have to produce a Julia set in C++ sequentially, in parallel, and in OpenCL. I have managed to produce an image, but the way I have used colours is very ineffective. Any ideas on how I could improve the colour section of my code? Below is the sequential part of my code; any help in improving how I have set the colours would be much appreciated.
void sequentialJulia(const complex<float> C, const UINT size = 1000,
const UINT MAX_ITERATIONS = 100, const float limit = 1.7f) {
int start_s = clock();// starts the timer
// Setup output image
fipImage outputImage;
outputImage = fipImage(FIT_BITMAP, size, size, 24);
UINT bytesPerElement = 3;
BYTE* outputBuffer = outputImage.accessPixels();
vector<int> colors{ 100, 140, 180, 220, 225 };// this sets the intensity of the image; if I were to remove 225 the image would be darker
//vector<int> colors{9, 19, 29, 39, 49 }; //THIS DOESNT WORK DO NOT UNCOMMENT
//RGBQUAD color;
complex<float> Z;
std::cout << "Processing...\n";
for (UINT y = 0; y < size; y++) {
//tracking progress;
cout << y * 100 / size << "%\r";
cout.flush();
for (UINT x = 0; x < size; x++) {
Z = complex<float>(-limit + 2.0f * limit / size * x, -limit + 2.0f * limit / size * y);
UINT i;
for (i = 0; i < MAX_ITERATIONS; i++) {
Z = Z * Z + C;
if (abs(Z) > 2.0f) break;
}
if (i < MAX_ITERATIONS) { // only writing one byte per pixel
// FreeImage stores 24-bit pixels as BGR, so within a pixel offset +0 = blue, +1 = green, +2 = red;
// an offset of +9 lands on the blue byte of the pixel three places to the right
outputBuffer[( y * size + x) * bytesPerElement + 9] = colors[i % 5];
}
}
}
cout << "Saving image...\n";
ostringstream name;
name << "..\\Images\\" << C << " size=" << size << " mIterations=" << MAX_ITERATIONS << " sequential19.png" ;
cout << "saving in: " << name.str().c_str() << "\n";
outputImage.save(name.str().c_str());
cout << "...done\n\n";
int stop_s = clock();
cout << "time: " << (stop_s - start_s) / double(CLOCKS_PER_SEC) * 1000 << endl;// stops the timer once code has executed
}

As far as I remember, fractal generators from the early 90s (e.g. Fractint) used the iteration-bailout index as an index into a table of 256 red-green-blue colours. (This was a common limit, as most displays back then were restricted to a colour palette of that size anyway.)
So maybe you could define a table of RGB colours, then look these up exactly the way you do colors[i % 5]; now, except you would output an RGB triple: colours[i % TABLE_SIZE].red, .green, .blue. I think it would be best to load your palette in from a separate file.
I've always wondered what a fractal with a 1000-entry colour palette might look like. Quite pretty I think.
EDIT: IIRC Fractint had a palette editing mode, and could save them to files.
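A minimal sketch of that idea, assuming the FreeImage/BGR layout the question's code already uses (the RGB struct and the palette values are illustrative; the FI_RGBA_* byte-index constants come from FreeImage.h):
struct RGB { BYTE r, g, b; };
// Example palette; in practice this could be loaded from a file, as suggested above.
const std::vector<RGB> palette{
    {  66,  30,  15 }, {  25,   7,  26 }, {   9,   1,  47 }, {   4,   4,  73 },
    {   0,   7, 100 }, {  12,  44, 138 }, {  24,  82, 177 }, {  57, 125, 209 },
    { 134, 181, 229 }, { 211, 236, 248 }, { 241, 233, 191 }, { 248, 201,  95 },
    { 255, 170,   0 }, { 204, 128,   0 }, { 153,  87,   0 }, { 106,  52,   3 }
};
...
if (i < MAX_ITERATIONS) {
    const RGB& c = palette[i % palette.size()];
    BYTE* px = outputBuffer + (y * size + x) * bytesPerElement;
    px[FI_RGBA_BLUE]  = c.b;   // FreeImage stores 24-bit pixels as BGR on little-endian machines
    px[FI_RGBA_GREEN] = c.g;
    px[FI_RGBA_RED]   = c.r;
}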

In addition to the excellent idea of using a look-up table, you can also interpolate between values in the table instead of just doing a modulus operation to pick one. So you could have a 5-color look-up table, but apply it to hundreds or thousands of iterations by linearly interpolating between the 5 colors. For example, if you have a maximum iteration of 256 and your current calculation takes 168 iterations to escape to infinity, and you have a 5-color look-up table, you could do this to get a color:
float lookupVal = static_cast<float>((colors.size() - 1) * i) / MAX_ITERATIONS;
int lookupIndex = static_cast<int>(floor(lookupVal));
float fraction = lookupVal - floor(lookupVal);
float colorF = static_cast<float>(colors[lookupIndex]) + fraction * static_cast<float>(colors[lookupIndex + 1] - colors[lookupIndex]);
uint8_t color = static_cast<uint8_t>(colorF);
If your look-up table had RGB values instead of just grayscale, you would need to calculate colorF and color for each color channel (red, green, and blue).
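For instance, a rough sketch of that per-channel interpolation (the RGB struct and function name are hypothetical; it assumes a non-empty palette and clamps the index so i == maxIterations cannot read past the end of the table):
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Linearly interpolate a colour from a small palette, based on how many
// iterations (i out of maxIterations) the point took to escape.
RGB interpolateColor(const std::vector<RGB>& palette, unsigned i, unsigned maxIterations)
{
    float lookupVal = static_cast<float>((palette.size() - 1) * i) / maxIterations;
    int lookupIndex = static_cast<int>(std::floor(lookupVal));
    if (lookupIndex >= static_cast<int>(palette.size()) - 1)
        return palette.back();                 // clamp at the last entry
    float fraction = lookupVal - std::floor(lookupVal);

    const RGB& a = palette[lookupIndex];
    const RGB& b = palette[lookupIndex + 1];
    RGB out;
    out.r = static_cast<uint8_t>(a.r + fraction * (b.r - a.r));
    out.g = static_cast<uint8_t>(a.g + fraction * (b.g - a.g));
    out.b = static_cast<uint8_t>(a.b + fraction * (b.b - a.b));
    return out;
}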

Related

Audio samples to musical note detection issue

I'm trying to set up a pipeline allowing me to detect musical notes from audio samples, but the input layer where I identify the frequency content of the samples does not land on the expected values. In the example below I:
- build what I expect to be a 440 Hz (A4) sine wave in the FFTW input buffer
- apply the Hamming window function
- look up the first half of the output bins to find the 4 top values and their frequencies
void GenerateSinWave(fftw_complex* outputArray, int N, double frequency, double samplingRate)
{
double sampleDurationSeconds = 1.0 / samplingRate;
for (int i = 0; i < N; ++i)
{
double sampleTime = i * sampleDurationSeconds;
outputArray[i][0] = sin(M_2_PI * frequency * sampleTime);
}
}
void HammingWindow(fftw_complex* array, int N)
{
static const double a0 = 25.0 / 46.0;
static const double a1 = 1 - a0;
for (int i = 0; i < N; ++i)
array[i][0] *= a0 - a1 * cos((M_2_PI * i) / N);
}
int main()
{
const int N = 4096;
double samplingRate = 44100;
double A4Frequency = 440;
fftw_complex in[N] = { 0 };
fftw_complex out[N] = { 0 };
fftw_plan plan = fftw_plan_dft_1d(N, 0, 0, FFTW_FORWARD, FFTW_ESTIMATE);
GenerateSinWave(in, N, A4Frequency, samplingRate);
HammingWindow(in, N);
fftw_execute_dft(plan, in, out);
// Find the 4 top values
double binHzRange = samplingRate / N;
for (int i = 0; i < 4; ++i)
{
double maxValue = 0;
int maxBin = 0;
for (int bin = 0; bin < (N/2); ++bin)
{
if (out[bin][0] > maxValue)
{
maxValue = out[bin][0];
maxBin = bin;
}
}
out[maxBin][0] = 0; // remove value for next pass
double binMidFreq = (maxBin * binHzRange) + (binHzRange / 2);
std::cout << (i + 1) << " -> Freq: " << binMidFreq << " Hz - Value: " << maxValue << "\n";
}
fftw_destroy_plan(plan);
}
I was expecting something close to 440 Hz, or lower/higher harmonics; however, the results are far from that:
1 -> Freq: 48.4497Hz - Value: 110.263
2 -> Freq: 59.2163Hz - Value: 19.2777
3 -> Freq: 69.9829Hz - Value: 5.68717
4 -> Freq: 80.7495Hz - Value: 2.97571
This flow is mostly inspired by this other SO answer. I feel that my lack of knowledge about signal processing might be the cause! My sine wave generation and window function seem to be OK, but audio analysis and FFTW are full of mysteries...
Any insight about how to improve my usage of FFTW, approach signal processing or simply write better code is appreciated!
EDIT: fixed the integer division that caused the Hamming a0 parameter to always be 0. The results changed a little, but are still far off the expected 440 Hz.
I think you've misunderstood the M_2_PI constant in your GenerateSinWave function. M_2_PI is defined as 2.0 / PI.
You should be using 2 * M_PI instead.
This mistake will mean that your generated signal has a frequency of only around 45 Hz. This should be close to the output frequencies you are seeing.
The same constant needs correcting in your HammingWindow function too.
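For example, here is a sketch of the fix applied to the question's GenerateSinWave (the same substitution applies inside HammingWindow, and it assumes M_PI is available from <cmath>). With 2/π instead of 2π, the generated tone ends up at roughly 440 / π² ≈ 44.6 Hz, which matches the ~48 Hz peak above once you account for the ~10.8 Hz bin width:
#include <cmath>
#include <fftw3.h>

void GenerateSinWave(fftw_complex* outputArray, int N, double frequency, double samplingRate)
{
    const double twoPi = 2.0 * M_PI;       // note: M_2_PI is 2/pi, which is NOT what we want
    double sampleDurationSeconds = 1.0 / samplingRate;
    for (int i = 0; i < N; ++i)
    {
        double sampleTime = i * sampleDurationSeconds;
        outputArray[i][0] = sin(twoPi * frequency * sampleTime);
        outputArray[i][1] = 0.0;           // keep the imaginary part at zero
    }
}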

Mandelbrot Slicing Image to Improve Speed

My Mandelbrot program currently renders the whole set as one image by calling the function from main.
// This shows the whole set.
compute_mandelbrot(-2.0, 1.0, 1.125, -1.125);
My plan is to split the image up into 16 horizontal slices and then display it, to improve the speed, as I can then add parallel programming to this.
I'm unsure how to create these slices. Can someone explain, redirect me, or show some example code?
image details:
// The size of the image to generate.
const int WIDTH = 100;
const int HEIGHT = 100;
// The number of times to iterate before we assume that a point isn't in the
// Mandelbrot set.
const int MAX_ITERATIONS = 500;
For the purpose of testing I'll include the full code. There are no errors, but it is evidently not coded efficiently, as the whole process takes over 30 seconds to output, which is way too long for a Mandelbrot set; hence the urgency of the slicing and parallel programming.
If anyone has any other pointers (e.g. where to implement the parallel programming), they would be greatly appreciated.
using std::chrono::duration_cast;
using std::chrono::milliseconds;
using std::complex;
using std::cout;
using std::endl;
using std::ofstream;
// Define the alias "the_clock" for the clock type we're going to use.
typedef std::chrono::steady_clock the_clock;
// The size of the image to generate.
const int WIDTH = 100;
const int HEIGHT = 100;
// The number of times to iterate before we assume that a point isn't in the
// Mandelbrot set.
const int MAX_ITERATIONS = 500;
// The image data.
// Each pixel is represented as 0xRRGGBB.
uint32_t image[HEIGHT][WIDTH];
// Write the image to a TGA file with the given name.
// Format specification: http://www.gamers.org/dEngine/quake3/TGA.txt
void write_tga(const char *filename)
{
ofstream outfile(filename, ofstream::binary);
uint8_t header[18] = {
0, // no image ID
0, // no colour map
2, // uncompressed 24-bit image
0, 0, 0, 0, 0, // empty colour map specification
0, 0, // X origin
0, 0, // Y origin
WIDTH & 0xFF, (WIDTH >> 8) & 0xFF, // width
HEIGHT & 0xFF, (HEIGHT >> 8) & 0xFF, // height
24, // bits per pixel
0, // image descriptor
};
outfile.write((const char *)header, 18);
for (int y = 0; y < HEIGHT; ++y)
{
for (int x = 0; x < WIDTH; ++x)
{
uint8_t pixel[3] = {
image[y][x] & 0xFF, // blue channel
(image[y][x] >> 8) & 0xFF, // green channel
(image[y][x] >> 16) & 0xFF, // red channel
};
outfile.write((const char *)pixel, 3);
}
}
outfile.close();
if (!outfile)
{
// An error has occurred at some point since we opened the file.
cout << "Error writing to " << filename << endl;
exit(1);
}
}
// Render the Mandelbrot set into the image array.
// The parameters specify the region on the complex plane to plot.
void compute_mandelbrot(double left, double right, double top, double bottom)
{
for (int y = 0; y < HEIGHT; ++y)
{
for (int x = 0; x < WIDTH; ++x)
{
// Work out the point in the complex plane that
// corresponds to this pixel in the output image.
complex<double> c(left + (x * (right - left) / WIDTH),
top + (y * (bottom - top) / HEIGHT));
// Start off z at (0, 0).
complex<double> z(0.0, 0.0);
// Iterate z = z^2 + c until z moves more than 2 units
// away from (0, 0), or we've iterated too many times.
int iterations = 0;
while (abs(z) < 2.0 && iterations < MAX_ITERATIONS)
{
z = (z * z) + c;
++iterations;
}
/*if (iterations == MAX_ITERATIONS)
{
// z didn't escape from the circle.
// This point is in the Mandelbrot set.
image[y][x] = 0x58DC77; // green
}*/
if (iterations <= 10)
{
// z didn't escape from the circle.
// This point is in the Mandelbrot set.
image[y][x] = 0xA9C3F6; // light blue
}
else if (iterations <=100)
{
// This point is in the Mandelbrot set.
image[y][x] = 0x36924B; // darkest green
}
else if (iterations <= 200)
{
// This point is in the Mandelbrot set.
image[y][x] = 0x5FB072; // lighter green
}
else if (iterations <= 300)
{
// z didn't escape from the circle.
// This point is in the Mandelbrot set.
image[y][x] = 0x7CD891; // mint green
}
else if (iterations <= 450)
{
// z didn't escape from the circle.
// This point is in the Mandelbrot set.
image[y][x] = 0x57F97D; // green
}
else
{
// z escaped within less than MAX_ITERATIONS
// iterations. This point isn't in the set.
image[y][x] = 0x58DC77; // light green
}
}
}
}
int main(int argc, char *argv[])
{
cout << "Processing" << endl;
// Start timing
the_clock::time_point start = the_clock::now();
// This shows the whole set.
compute_mandelbrot(-2.0, 1.0, 1.125, -1.125);
// This zooms in on an interesting bit of detail.
//compute_mandelbrot(-0.751085, -0.734975, 0.118378, 0.134488);
// Stop timing
the_clock::time_point end = the_clock::now();
// Compute the difference between the two times in milliseconds
auto time_taken = duration_cast<milliseconds>(end - start).count();
cout << "Computing the Mandelbrot set took " << time_taken << " ms." << endl;
write_tga("output.tga");
return 0;
}
Let's say you want to use N parallel threads for the rendering; then each thread will handle HEIGHT / N lines.
For simplicity's sake I'll pick an N that divides your HEIGHT evenly, like 5. That means each thread will handle 20 lines (with your HEIGHT being equal to 100).
You could implement it something like this:
constexpr int THREADS = 5; // Our "N", divides HEIGHT evenly
void compute_mandelbrot_piece(double left, double right, double top, double bottom, unsigned y_from, unsigned y_to)
{
for (unsigned y = y_from; y < y_to; ++y)
{
for (unsigned x = 0; x < WIDTH; ++x)
{
// Existing code to calculate value for y,x
// ...
}
}
}
void compute_mandelbrot(double left, double right, double top, double bottom)
{
std::vector<std::thread> render_threads;
render_threads.reserve(THREADS); // Allocate memory for all threads, keep the size zero
// Create threads, each handling part of the image
for (unsigned y = 0; y < HEIGHT; y += HEIGHT / THREADS)
{
render_threads.emplace_back(&compute_mandelbrot_piece, left, right, top, bottom, y, y + HEIGHT / THREADS);
}
// Wait for the threads to finish, and join them
for (auto& thread : render_threads)
{
thread.join();
}
// Now all threads are done, and the image should be fully rendered and ready to save
}
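As a small follow-up sketch (a hypothetical variant, not part of the answer above): since compute_mandelbrot keeps its original signature, main() does not need to change, and the thread count can be taken from the hardware rather than a fixed constant, with the last thread picking up any leftover rows:
#include <algorithm>
#include <thread>
#include <vector>

void compute_mandelbrot(double left, double right, double top, double bottom)
{
    // Use however many hardware threads are available (fall back to 1 if unknown).
    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    const unsigned rowsPerThread = HEIGHT / threads;

    std::vector<std::thread> render_threads;
    for (unsigned t = 0; t < threads; ++t)
    {
        unsigned y_from = t * rowsPerThread;
        // The last thread also takes the remainder when HEIGHT % threads != 0.
        unsigned y_to = (t == threads - 1) ? HEIGHT : y_from + rowsPerThread;
        render_threads.emplace_back(compute_mandelbrot_piece, left, right, top, bottom, y_from, y_to);
    }
    for (auto& thread : render_threads)
        thread.join();
}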

Reading 16 bit DPX Pixel Data

I'm trying to read in pixel data from a 16-bit DPX file, extending a previous git repo (which only supports 10-bit).
This is the dpx format summary
I'm utilizing this header and cpp to deal with the header info and getting that sort of data.
Note that the variables _pixelOffset, _width, _height, and _channels are based on the header information of the DPX file. pPixels is a float* array:
#include <iostream>
#include <fstream>
#include <dpxHeader.h>
//First read the file as binary.
std::ifstream _in(_filePath.asChar(), std::ios_base::binary);
// Seek to the pixel offset to start reading where the pixel data starts.
if (!_in.seekg (_pixelOffset, std::ios_base::beg))
{
std::cerr << "Cannot seek to start of pixel data " << _filePath << " in DPX file.";
return MS::kFailure;
}
// Create char to store data of width length of the image
unsigned char *rawLine = new unsigned char[_width * 4](); // buffer for one scan line of raw pixel data
// Iterate over height pixels
for (int y = 0; y < _height; ++y)
{
// Read full pixel data for width.
if (!_in.read ((char *)&rawLine[0], _width * 4))
{
std::cerr << "Cannot read scan line " << y << " " << "from DPX file " << std::endl;
return MS::kFailure;
}
// Iterate over width
for (int x = 0; x < _width; ++x)
{
// We do this to flip the image because it's flipped vertically when read in
int index = ((_height - 1 - y) * _width * _channels) + x * _channels;
unsigned int word = getU32(rawLine + 4 * x, _byteOrder);
pPixels[index] = (((word >> 22) & 0x3ff)/1023.0);
pPixels[index+1] = (((word >> 12) & 0x3ff)/1023.0);
pPixels[index+2] = (((word >> 02) & 0x3ff)/1023.0);
}
}
delete [] rawLine;
This currently works for 10-bit files, but as I am new to bitwise operations I'm not completely sure how to extend this to 12 and 16 bit. Does anyone have any clues, or could you point me in the right direction?
This file format is somewhat comprehensive, but if you are only targeting a known subset it shouldn't be too hard to extend.
From your code sample it appears that you are currently working with three components per pixel, and that the components are filled into 32-bit words. In this mode both 12-bit and 16-bit will store two components per word, according to the specification you've provided. For 12-bit, the upper 4 bits of each component are padding data. You will need three 32-bit words to get the six colour components that decode into two pixels:
...
unsigned int word0 = getU32(rawLine + 6 * x + 0, _byteOrder);
unsigned int word1 = getU32(rawLine + 6 * x + 4, _byteOrder);
unsigned int word2 = getU32(rawLine + 6 * x + 8, _byteOrder);
// First pixel
pPixels[index] = (word0 & 0xffff) / (float)0xffff; // (or 0xfff for 12-bit)
pPixels[index+1] = (word0 >> 16) / (float)0xffff;
pPixels[index+2] = (word1 & 0xffff) / (float)0xffff;
x++;
if(x >= _width) break; // In case of an odd amount of pixels
// Second pixel
pPixels[index+3] = (word1 >> 16) / (float)0xffff;
pPixels[index+4] = (word2 & 0xffff) / (float)0xffff;
pPixels[index+5] = (word2 >> 16) / (float)0xffff;
...
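For the 12-bit filled case the structure stays the same; here is a hedged sketch (the helper name is hypothetical), masking with 0xfff and relying on the statement above that the upper 4 bits of each 16-bit slot are padding. Note that the scan-line buffer and read size would also need to grow, since each pixel now occupies 6 bytes rather than 4:
// Pull one filled 12-bit component out of a 16-bit slot
// ('slot' is the low or high half of a 32-bit word read with getU32()).
inline float component12(unsigned int slot)
{
    return (slot & 0xfff) / (float)0xfff;   // upper 4 bits of the slot are padding
}

// First pixel of the pair, analogous to the 16-bit code above
pPixels[index]     = component12(word0 & 0xffff);
pPixels[index + 1] = component12(word0 >> 16);
pPixels[index + 2] = component12(word1 & 0xffff);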

Why are the image's HOG values so low in an outdoor scene?

I am attempting to find pedestrians/people in images with the help of a cascade classifier that uses HOG features.
The problem I'm trying to solve is in the initial stage, feature generation: the HOG values in certain areas of the images are too low, and hence the classifier fails.
The images below were captured using a Basler aca640-100gc Camera.
The visualization of the HOG was borrowed from the code on the referenced webpage; that code is also attached at the end of the question.
This first image here and its HOG are what I'm trying to achieve: a realistic outdoor scene that can be used to generate features and hopefully find people. (This is not an image I captured with my camera.)
Captured Outdoor Images results
The images below are what I have captured with the camera. I have tried all the basic variations, playing with the brightness and focus, but this still yields a poor result in an outdoor scene, where I am inside the car and the camera is attached close to the windscreen.
On the contrary, when the same camera was used to record an indoor scene it works fine. Why it works in an indoor situation and not in an outdoor scene is something I can't understand.
Captured Indoor Images results
As seen in the images below same configuration works for an indoor scene.
Desired results
Ideally I would like the results of the outdoor recordings to look like this.
Could anyone give me insight into why this happens,
or how I can overcome this issue to generate reliable HOGs for detection?
Code to visualize HOG
Mat img_raw = imread("C:\\testimg.png", 1); // load as color image
resize(img_raw, img_raw, Size(64,128) );
Mat img;
cvtColor(img_raw, img, CV_RGB2GRAY);
HOGDescriptor d;
// Size(128,64), //winSize
// Size(16,16), //blocksize
// Size(8,8), //blockStride,
// Size(8,8), //cellSize,
// 9, //nbins,
// 0, //derivAper,
// -1, //winSigma,
// 0, //histogramNormType,
// 0.2, //L2HysThresh,
// 0 //gammal correction,
// //nlevels=64
//);
// void HOGDescriptor::compute(const Mat& img, vector<float>& descriptors,
// Size winStride, Size padding,
// const vector<Point>& locations) const
vector<float> descriptorsValues;
vector<Point> locations;
d.compute( img, descriptorsValues, Size(8,8), Size(8,8), locations);
cout << "HOG descriptor size is " << d.getDescriptorSize() << endl;
cout << "img dimensions: " << img.cols << " width x " << img.rows << "height" << endl;
cout << "Found " << descriptorsValues.size() << " descriptor values" << endl;
cout << "Nr of locations specified : " << locations.size() << endl;
Mat get_hogdescriptor_visual_image(Mat& origImg,
vector<float>& descriptorValues,
Size winSize,
Size cellSize,
int scaleFactor,
double viz_factor)
{
Mat visual_image;
resize(origImg, visual_image, Size(origImg.cols*scaleFactor, origImg.rows*scaleFactor));
int gradientBinSize = 9;
// dividing 180° into 9 bins, how large (in rad) is one bin?
float radRangeForOneBin = 3.14/(float)gradientBinSize;
// prepare data structure: 9 orientation / gradient strengths for each cell
int cells_in_x_dir = winSize.width / cellSize.width;
int cells_in_y_dir = winSize.height / cellSize.height;
int totalnrofcells = cells_in_x_dir * cells_in_y_dir;
float*** gradientStrengths = new float**[cells_in_y_dir];
int** cellUpdateCounter = new int*[cells_in_y_dir];
for (int y=0; y<cells_in_y_dir; y++)
{
gradientStrengths[y] = new float*[cells_in_x_dir];
cellUpdateCounter[y] = new int[cells_in_x_dir];
for (int x=0; x<cells_in_x_dir; x++)
{
gradientStrengths[y][x] = new float[gradientBinSize];
cellUpdateCounter[y][x] = 0;
for (int bin=0; bin<gradientBinSize; bin++)
gradientStrengths[y][x][bin] = 0.0;
}
}
// nr of blocks = nr of cells - 1
// since there is a new block on each cell (overlapping blocks!) but the last one
int blocks_in_x_dir = cells_in_x_dir - 1;
int blocks_in_y_dir = cells_in_y_dir - 1;
// compute gradient strengths per cell
int descriptorDataIdx = 0;
int cellx = 0;
int celly = 0;
for (int blockx=0; blockx<blocks_in_x_dir; blockx++)
{
for (int blocky=0; blocky<blocks_in_y_dir; blocky++)
{
// 4 cells per block ...
for (int cellNr=0; cellNr<4; cellNr++)
{
// compute corresponding cell nr
int cellx = blockx;
int celly = blocky;
if (cellNr==1) celly++;
if (cellNr==2) cellx++;
if (cellNr==3)
{
cellx++;
celly++;
}
for (int bin=0; bin<gradientBinSize; bin++)
{
float gradientStrength = descriptorValues[ descriptorDataIdx ];
descriptorDataIdx++;
gradientStrengths[celly][cellx][bin] += gradientStrength;
} // for (all bins)
// note: overlapping blocks lead to multiple updates of this sum!
// we therefore keep track how often a cell was updated,
// to compute average gradient strengths
cellUpdateCounter[celly][cellx]++;
} // for (all cells)
} // for (all block x pos)
} // for (all block y pos)
// compute average gradient strengths
for (int celly=0; celly<cells_in_y_dir; celly++)
{
for (int cellx=0; cellx<cells_in_x_dir; cellx++)
{
float NrUpdatesForThisCell = (float)cellUpdateCounter[celly][cellx];
// compute average gradient strengths for each gradient bin direction
for (int bin=0; bin<gradientBinSize; bin++)
{
gradientStrengths[celly][cellx][bin] /= NrUpdatesForThisCell;
}
}
}
cout << "descriptorDataIdx = " << descriptorDataIdx << endl;
// draw cells
for (int celly=0; celly<cells_in_y_dir; celly++)
{
for (int cellx=0; cellx<cells_in_x_dir; cellx++)
{
int drawX = cellx * cellSize.width;
int drawY = celly * cellSize.height;
int mx = drawX + cellSize.width/2;
int my = drawY + cellSize.height/2;
rectangle(visual_image,
Point(drawX*scaleFactor,drawY*scaleFactor),
Point((drawX+cellSize.width)*scaleFactor,
(drawY+cellSize.height)*scaleFactor),
CV_RGB(100,100,100),
1);
// draw in each cell all 9 gradient strengths
for (int bin=0; bin<gradientBinSize; bin++)
{
float currentGradStrength = gradientStrengths[celly][cellx][bin];
// no line to draw?
if (currentGradStrength==0)
continue;
float currRad = bin * radRangeForOneBin + radRangeForOneBin/2;
float dirVecX = cos( currRad );
float dirVecY = sin( currRad );
float maxVecLen = cellSize.width/2;
float scale = viz_factor; // just a visualization scale,
// to see the lines better
// compute line coordinates
float x1 = mx - dirVecX * currentGradStrength * maxVecLen * scale;
float y1 = my - dirVecY * currentGradStrength * maxVecLen * scale;
float x2 = mx + dirVecX * currentGradStrength * maxVecLen * scale;
float y2 = my + dirVecY * currentGradStrength * maxVecLen * scale;
// draw gradient visualization
line(visual_image,
Point(x1*scaleFactor,y1*scaleFactor),
Point(x2*scaleFactor,y2*scaleFactor),
CV_RGB(0,0,255),
1);
} // for (all bins)
} // for (cellx)
} // for (celly)
// don't forget to free memory allocated by helper data structures!
for (int y=0; y<cells_in_y_dir; y++)
{
for (int x=0; x<cells_in_x_dir; x++)
{
delete[] gradientStrengths[y][x];
}
delete[] gradientStrengths[y];
delete[] cellUpdateCounter[y];
}
delete[] gradientStrengths;
delete[] cellUpdateCounter;
return visual_image;
}

Need floating point precision, GUI uses int

I have a flow layout. Inside it I have about 900 tables. Each table is stacked one on top of the other. I have a slider which resizes them and thus causes the flow layout to resize too.
The problem is, the tables should be resizing linearly. Their base size is 200x200, so when scale = 1.0 the width and height of the tables are 200.
I resize by a fixed amount, making them 4% bigger each time. This means I would expect them to grow by 8 pixels each time. What happens is that every few resizes the tables grow by 9 pixels instead. I use doubles everywhere. I have tried rounding, floor and ceil but the problem persists. What could I do so that they always grow by the correct amount?
void LobbyTableManager::changeTableScale( double scale )
{
setTableScale(scale);
}
void LobbyTableManager::setTableScale( double scale )
{
scale += 0.3;
scale *= 2.0;
std::cout << scale << std::endl;
agui::Gui* gotGui = getGui();
float scrollRel = m_vScroll->getRelativeValue();
setScale(scale);
rescaleTables();
resizeFlow();
...
double LobbyTableManager::getTableScale() const
{
return (getInnerWidth() / 700.0) * getScale();
}
void LobbyFilterManager::valueChanged( agui::Slider* source,int val )
{
if(source == m_magnifySlider)
{
DISPATCH_LOBBY_EVENT
{
(*it)->changeTableScale((double)val / source->getRange());
}
}
}
void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect )
{
int cx, cy, cw, ch;
g->getClippingRect(cx,cy,cw,ch);
g->setClippingRect(absRect.getX(),absRect.getY(),absRect.getWidth(),absRect.getHeight());
float scale = 0.35f;
int w = m_bgSprite->getWidth() * getTableScale() * scale;
int h = m_bgSprite->getHeight() * getTableScale() * scale;
int numX = ceil(absRect.getWidth() / (float)w) + 2;
int numY = ceil(absRect.getHeight() / (float)h) + 2;
float offsetX = m_activeTables[0]->getLocation().getX() - w;
float offsetY = m_activeTables[0]->getLocation().getY() - h;
int startY = childRect.getY() + 1;
if(moo)
{
std::cout << "TS: " << getTableScale() << " Scr: " << m_vScroll->getValue() << " LOC: " << childRect.getY() << " H: " << h << std::endl;
}
if(moo)
{
std::cout << "S=" << startY << ",";
}
int numAttempts = 0;
while(startY + h < absRect.getY() && numAttempts < 1000)
{
startY += h;
if(moo)
{
std::cout << startY << ",";
}
numAttempts++;
}
if(moo)
{
std::cout << "\n";
moo = false;
}
g->holdDrawing();
for(int i = 0; i < numX; ++i)
{
for(int j = 0; j < numY; ++j)
{
g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(),
absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0);
}
}
g->unholdDrawing();
g->setClippingRect(cx,cy,cw,ch);
}
void LobbyTable::rescale( double scale )
{
setScale(scale);
float os = getObjectScale();
double x = m_baseHeight * os;
if((int)(x + 0.5) > (int)x)
{
x++;
}
int oldH = getHeight();
setSize(m_baseWidth * os, floor(x));
...
I added the related code. The slider sends a value-changed event, which is multiplied to get a 4 percent increase (or 8 percent if the slider moves 2 values, etc.); then the tables are rescaled with this.
The first 3 times the table size increased by 9 px; the 4th time it increased by 8 px. But the scale factor increases by 0.04 each time.
Why is the 4th time inconsistent?
The pattern seems like 8, 8, 8, 9, 9, 9, 8, 8, 8, 9, 9, 9...
It increases by 1 pixel more for a few, then decreases by 1, then increases by 1, and so on; that's my issue...
I still don't see the "add 4%" code there (in a form I can understand, anyway), but from your description I think I see the problem: adding 4% twice is not adding 8%. It is adding 8.16% (1.04 * 1.04 == 1.0816). Do that a few more times and you'll start getting 9 pixel jumps. Do it a lot more times and your jumps will get much bigger (they will be 16 pixel jumps when the size gets up to 400x400). Which, IMHO is how I like my scaling to happen.
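To make the compounding concrete, here is a minimal standalone sketch (not taken from the question's code) comparing the two ways of applying the 4% step: multiplying the current size repeatedly versus always scaling the 200-pixel base size by the accumulated percentage:
#include <cmath>
#include <iostream>

int main()
{
    const double baseSize = 200.0;
    double compounded = baseSize;               // multiply the current size by 1.04 each step

    for (int step = 1; step <= 10; ++step)
    {
        compounded *= 1.04;                     // 1.04 * 1.04 * ... compounds to more than 4% per step
        double fromBase = baseSize * (1.0 + 0.04 * step);   // always exactly 8 pixels more than last step
        std::cout << "step " << step
                  << "  compounded: " << std::lround(compounded)
                  << "  from base: "  << std::lround(fromBase) << "\n";
    }
    // The compounded sizes jump by 8, 8, 9, 9, 9, 10, ... pixels,
    // while the from-base sizes grow by exactly 8 pixels every step.
    return 0;
}
So if strictly linear 8-pixel growth is what's wanted, each new size should be computed from the base size and the step count rather than from the previous size; otherwise the drifting jumps described in the question are the expected behaviour of repeated multiplication.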