Getting bad results with distance estimator for Julia set - C++

I've been working on drawing the Julia set using a distance estimator instead of the normalized iteration count. I usually use the code below and play around with the iteration count until I get a decent enough picture:
double Mandelbrot::getJulia(double x, double y)
{
    complex<double> z(x, y);
    complex<double> c(-0.7269, 0.1889);
    double iterations = 0;
    while (iterations < MAX)
    {
        z = z * z + c;
        if (abs(z) > 2) {
            return iterations + 1.0 - log(log2(abs(z)));
        }
        iterations++;
    }
    return double(MAX);
}
I then call this for each point and draw to a bitmap:
ZoomTool zt(WIDTH, HEIGHT);
zt.add(Zoom(WIDTH / 2, HEIGHT / 2, 4.0 / WIDTH));
for (int y = 0; y < HEIGHT; y++) {
    for (int x = 0; x < WIDTH; x++) {
        pair<double, double> coords = zt.zoomIn(x, y);
        double iterations = Mandelbrot::getJulia(coords.first, coords.second);
        double ratio = iterations / Mandelbrot::MAX;
        double h = 0;
        double s = 0;
        double v = 0;
        if (ratio != 1)
        {
            h = 360.0 * ratio;
            s = 1.0;
            v = 1.0;
        }
        HSV hsv(h, s, v);
        RGB rgb = toRGB(hsv);
        bitmap.setPixel(x, y, rgb._r, rgb._g, rgb._b);
    }
}
At 600 iterations, I get this:
It's not great, but better than what I get with the distance estimator, which I am now attempting to use. I've implemented the distance estimator as below:
double Mandelbrot::getJulia(double x, double y)
{
    complex<double> z(x, y);
    complex<double> c(-0.7269, 0.1889);
    complex<double> dz = 0;
    double iterations = 0;
    while (iterations < MAX)
    {
        dz = 2.0 * dz * z + 1.0;
        z = z * z + c;
        if (abs(z) > 2)
        {
            return abs(z) * log(abs(z)) / abs(dz);
        }
        iterations++;
    }
    return Mandelbrot::MAX;
}
At 600 iterations, I get the following image.
Am I not normalizing the colors correctly? I'm guessing this happens because I'm normalizing to 360.0 and then converting from HSV to RGB. Since the distances are quite small, I get a very condensed distribution of colors.
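One remap I'm experimenting with (just a sketch; the pixel-size scale of 4.0 / WIDTH from the zoom setup and the 0.25 exponent are my own guesses) spreads the small distances out before they hit the hue:
#include <algorithm>
#include <cmath>

// Sketch: remap a raw distance estimate into [0, 1] before coloring.
// pixelSize and the 0.25 exponent are guesses, nothing more.
double distanceToRatio(double dist, double pixelSize)
{
    if (dist <= 0.0)
        return 1.0;                          // treat as inside the set
    double t = dist / pixelSize;             // distance in pixel units
    return std::min(1.0, std::pow(t, 0.25)); // spread the small values out
}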


Why does using double instead of float give a wrong result in this double integration code?

I found the following code on this page to compute a double integral. Whenever I run it with all variables declared as float, it gives the right result for the example integral, which is 3.91905. However, if I just change all float variables to double, the program gives a completely wrong result (2.461486) for this integral.
Could you help me understand why this happens? I expected a better result using double precision, but that's evidently not the case here.
Below is the code pasted from the aforementioned website.
// C++ program to calculate
// double integral value
#include <bits/stdc++.h>
using namespace std;

// Change the function according to your need
float givenFunction(float x, float y)
{
    return pow(pow(x, 4) + pow(y, 5), 0.5);
}

// Function to find the double integral value
float doubleIntegral(float h, float k,
                     float lx, float ux,
                     float ly, float uy)
{
    int nx, ny;

    // z stores the table
    // ax[] stores the integral wrt y
    // for all x points considered
    float z[50][50], ax[50], answer;

    // Calculating the number of points
    // in x and y integral
    nx = (ux - lx) / h + 1;
    ny = (uy - ly) / k + 1;

    // Calculating the values of the table
    for (int i = 0; i < nx; ++i) {
        for (int j = 0; j < ny; ++j) {
            z[i][j] = givenFunction(lx + i * h,
                                    ly + j * k);
        }
    }

    // Calculating the integral value
    // wrt y at each point for x
    for (int i = 0; i < nx; ++i) {
        ax[i] = 0;
        for (int j = 0; j < ny; ++j) {
            if (j == 0 || j == ny - 1)
                ax[i] += z[i][j];
            else if (j % 2 == 0)
                ax[i] += 2 * z[i][j];
            else
                ax[i] += 4 * z[i][j];
        }
        ax[i] *= (k / 3);
    }

    answer = 0;

    // Calculating the final integral value
    // using the integral obtained in the above step
    for (int i = 0; i < nx; ++i) {
        if (i == 0 || i == nx - 1)
            answer += ax[i];
        else if (i % 2 == 0)
            answer += 2 * ax[i];
        else
            answer += 4 * ax[i];
    }
    answer *= (h / 3);

    return answer;
}

// Driver Code
int main()
{
    // lx and ux are lower and upper limits of the x integral
    // ly and uy are lower and upper limits of the y integral
    // h is the step size for integration wrt x
    // k is the step size for integration wrt y
    float h, k, lx, ux, ly, uy;
    lx = 2.3, ux = 2.5, ly = 3.7,
    uy = 4.3, h = 0.1, k = 0.15;

    printf("%f", doubleIntegral(h, k, lx, ux, ly, uy));
    return 0;
}
Thanks in advance for your help!
Due to numeric imprecision, this line:
ny = (uy - ly) / k + 1; // 'ny' is an int.
evaluates to 5 when the types of uy, ly and k are float, but yields 4 when the type is double.
You may use std::round((uy - ly) / k) or a different formula (I haven't checked the mathematical correctness of the whole program).
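A minimal sketch of that fix inside doubleIntegral, keeping the question's variable names (std::round needs <cmath>):
// Rounding to the nearest integer instead of truncating makes the point
// count robust to the tiny representation error in (uy - ly) / k.
nx = (int)std::round((ux - lx) / h) + 1;
ny = (int)std::round((uy - ly) / k) + 1;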

Converting Cartesian Coordinates To Polar Coordinates

I'm trying to convert an image from Cartesian coordinates to polar coordinates, but after applying the formulas I get float coordinates (r and teta) and I don't know how to represent the points in space using floats for x and y. There might be a way of transforming them into ints while still preserving the distribution, but I don't see how. I know that there are functions in OpenCV, like warpPolar, that do the work, but I would like to implement it by myself. Any ideas would help :)
This is my code:
struct Value
{
    double r;
    double teta;
    int value; // pixel intensity
};

void irisNormalization(Mat img, Circle pupilCircle, Circle irisCircle, int &matrixWidth, int &matrixHeight)
{
    int w = img.size().width;
    int h = img.size().height;
    int X, Y;
    double r, teta;
    int rayOfIris = irisCircle.getRay();
    std::vector<Value> polar;

    // consider the rectangle the iris circle is confined in
    int xstart = irisCircle.getA() - rayOfIris;
    int ystart = irisCircle.getB() - rayOfIris;
    int xfinish = irisCircle.getA() + rayOfIris;
    int yfinish = irisCircle.getB() + rayOfIris;

    for (int x = xstart; x < xfinish; x++)
        for (int y = ystart; y < yfinish; y++)
        {
            X = x - xstart - rayOfIris;
            Y = y - ystart - rayOfIris;
            r = sqrt(X * X + Y * Y);
            if (X != 0)
            {
                teta = atan(abs(Y / X)) * double(180 / M_PI);
                if (X > 0 && Y > 0) // quadrant 1
                    teta = teta;
                if (X > 0 && Y < 0) // quadrant 4
                    teta = 360 - teta;
                if (X < 0 && Y > 0) // quadrant 2
                    teta = 180 - teta;
                if (X < 0 && Y < 0) // quadrant 3
                    teta = 180 + teta;
                if (r < rayOfIris)
                {
                    polar.push_back({ r, teta, int(((Scalar)(img.at<uchar>(Point(x, y)))).val[0]) });
                }
            }
        }

    std::sort(polar.begin(), polar.end(), [](const Value &left, const Value &right) {
        return left.r < right.r && left.teta < right.teta;
    });

    for (std::vector<Value>::const_iterator i = polar.begin(); i != polar.end(); ++i)
        std::cout << i->r << ' ' << i->teta << endl;
}
Your implementation expresses every integer-coordinate point inside a given circle in polar coordinates. This way, however, you end up with an array of coordinates together with their values, not a transformed image.
If instead you want to geometrically transform your image, you should:
create the destination image with proper width (rho resolution) and height (theta resolution);
loop through every pixel of the destination image and map it back into the original image with the inverse transformation;
get the value of the back-transformed point from the original image, interpolating between nearby values if necessary (see the sketch after the list of methods below).
For interpolating the values, several methods are available. A non-exhaustive list includes:
nearest-neighbor interpolation;
bilinear interpolation;
bicubic interpolation.
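A minimal sketch of such an inverse mapping with nearest-neighbour interpolation (the simplest option above); the destination resolutions and circle parameters are placeholders, not values from your code:
#include <opencv2/core.hpp>
#include <cmath>

// Sketch: unwrap the circle of radius 'ray' centred at (cx, cy) in 'src'
// into an image whose rows are radii and whose columns are angles.
// rhoRes and thetaRes are arbitrary choices.
cv::Mat unwrapCircle(const cv::Mat &src, int cx, int cy, int ray,
                     int rhoRes = 64, int thetaRes = 360)
{
    cv::Mat dst(rhoRes, thetaRes, CV_8UC1, cv::Scalar(0));
    for (int i = 0; i < rhoRes; ++i) {
        double r = ray * double(i) / rhoRes;
        for (int j = 0; j < thetaRes; ++j) {
            double theta = 2.0 * M_PI * j / thetaRes;
            // inverse transform: destination pixel -> source pixel
            int x = (int)std::lround(cx + r * std::cos(theta));
            int y = (int)std::lround(cy + r * std::sin(theta));
            if (x >= 0 && x < src.cols && y >= 0 && y < src.rows)
                dst.at<uchar>(i, j) = src.at<uchar>(y, x); // nearest neighbour
        }
    }
    return dst;
}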

How to create a Gaussian kernel of arbitrary width?

How do I create a Gaussian kernel by only specifying its width w (3, 5, 7, 9, ...), without specifying its variance sigma?
In other words, how do I adapt sigma so that the Gaussian distribution 'fits' the width w well?
I would be interested in a C++ implementation:
void create_gaussian_kernel(int w, std::vector<std::vector<float>>& kernel)
{
    kernel = std::vector<std::vector<float>>(w, std::vector<float>(w, 0.f)); // 2D array of size w x w
    const Scalar sigma = 1.0; // how to adapt sigma to w ???
    const int hw = (w - 1) / 2; // half width
    for (int di = -hw; di <= +hw; ++di)
    {
        const int i = hw + di;
        for (int dj = -hw; dj <= +hw; ++dj)
        {
            const int j = hw + dj;
            kernel[i][j] = gauss2D(di, dj, sigma);
        }
    }
}
Everything I see on the Internet uses a fixed size w and a fixed variance sigma:
geeksforgeeks.org/gaussian-filter-generation-c/
tutorialspoint.com/gaussian-filter-generation-in-cplusplus
stackoverflow.com/a/8204880/5317819
stackoverflow.com/q/42186498/5317819
stackoverflow.com/a/54615770/5317819
I found a simple (arbitrary) relation between sigma and w.
I want the first value outside the kernel (along one axis) to fall below a very small value epsilon:
exp( - (half_width + 1)^2 / (2 * sigma^2) ) < epsilon
where half_width is the kernel's half width.
Solving for sigma gives
sigma^2 = - (half_width + 1)^2 / (2 * log(epsilon))
I use the following C++ code:
#include <vector>
#include <cmath>
#include <cassert>

using Matrix = std::vector<std::vector<float>>;

// compute sigma^2 that 'fits' the kernel half width
float compute_squared_variance(int half_width, float epsilon = 0.001)
{
    assert(0 < epsilon && epsilon < 1); // small value required
    return -(half_width + 1.0) * (half_width + 1.0) / 2.0 / std::log(epsilon);
}

float gaussian_exp(float y, float x, float sigma2)
{
    assert(0 < sigma2);
    return std::exp(-(x * x + y * y) / (2 * sigma2));
}

// create a Gaussian kernel of size 2*half_width+1 x 2*half_width+1
Matrix make_gaussian_kernel(int half_width)
{
    if (half_width <= 0)
    {
        // kernel of size 1 x 1
        Matrix kernel(1, std::vector<float>(1, 1.0));
        return kernel;
    }

    Matrix kernel(2 * half_width + 1, std::vector<float>(2 * half_width + 1, 0.0));
    const float sigma2 = compute_squared_variance(half_width, 0.1);

    float sum = 0;
    for (int di = -half_width; di <= +half_width; ++di)
    {
        const int i = half_width + di;
        for (int dj = -half_width; dj <= +half_width; ++dj)
        {
            const int j = half_width + dj;
            kernel[i][j] = gaussian_exp(di, dj, sigma2);
            sum += kernel[i][j];
        }
    }
    assert(0 < sum);

    // normalize
    for (int i = 0; i < 2 * half_width + 1; ++i)
    {
        for (int j = 0; j < 2 * half_width + 1; ++j)
        {
            kernel[i][j] /= sum;
        }
    }
    return kernel;
}
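As a quick usage check (my addition, reusing the Matrix alias above): for a requested width w, the half width is (w - 1) / 2, so a 5 x 5 kernel is
Matrix kernel = make_gaussian_kernel((5 - 1) / 2); // 5 x 5, entries sum to 1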

Perlin Noise 2D: turning static into clouds

I am trying to wrap my head around Perlin noise.
This article has helped and I have been trying to recreate the cloud type images that it provides.
My noise code is as follows:
#include "terrain_generator.hpp"
using namespace std;
#define PI 3.1415927;
float noise(int x, int y)
{
int n = x + y * 57;
n = (n<<13) ^ n;
return (1.0 - ( (n * ((n * n * 15731) + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
}
float cosine_interpolate(float a, float b, float x)
{
float ft = x * PI;
float f = (1 - cos(ft)) * 0.5;
float result = a*(1-f) + b*f;
return result;
}
float smooth_noise_2D(float x, float y)
{
float corners = ( noise(x-1, y-1)+noise(x+1, y-1)+noise(x-1, y+1)+noise(x+1, y+1) ) / 16;
float sides = ( noise(x-1, y) +noise(x+1, y) +noise(x, y-1) +noise(x, y+1) ) / 8;
float center = noise(x, y) / 4;
return corners + sides + center;
}
float interpolated_noise(float x, float y)
{
int x_whole = (int) x;
float x_frac = x - x_whole;
int y_whole = (int) y;
float y_frac = y - y_whole;
float v1 = smooth_noise_2D(x_whole, y_whole);
float v2 = smooth_noise_2D(x_whole, y_whole+1);
float v3 = smooth_noise_2D(x_whole+1, y_whole);
float v4 = smooth_noise_2D(x_whole+1, y_whole+1);
float i1 = cosine_interpolate(v1,v3,x_frac);
float i2 = cosine_interpolate(v2,v4,x_frac);
return cosine_interpolate(i1, i2, y_frac);
}
float perlin_noise_2D(float x, float y)
{
int octaves=5;
float persistence=0.5;
float total = 0;
for(int i=0; i<octaves-1; i++)
{
float frequency = pow(2,i);
float amplitude = pow(persistence,i);
total = total + interpolated_noise(x * frequency, y * frequency) * amplitude;
}
return total;
}
To actually implement the algorithm, I am trying to make the clouds he depicted in the article.
I am using OpenGL, making my own texture, and pasting it onto a quad that covers the screen. That is irrelevant, though; in the code below, just know that the setPixel function works correctly and that its parameters are (x, y, red, green, blue).
This is essentially my draw loop:
for (int y = 0; y < texture_height; y++)
{
    for (int x = 0; x < texture_width; x++)
    {
        seed2 += 1;
        float Val = perlin_noise_2D(x, y);
        Val = Val / 2.0;
        Val = (Val + 1.0) / 2.0;
        setPixel(x, y, Val, Val, Val);
    }
}
What I get is the following:
How can I manipulate my algorithm to achieve the effect I am looking for? Changing the persistence or number of octaves doesn't seem to do much at all.
As your result looks almost like white noise, your samples are probably too far apart within the Perlin noise. Try evaluating the noise at coordinates much smaller than the pixel coordinates.
Something similar to this:
perlin_noise_2D((float)x/texture_width,(float)y/texture_height);
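Applied to the draw loop from the question, that would look something like this (the normalization lines are kept as they are; only the sampling coordinates change):
for (int y = 0; y < texture_height; y++)
{
    for (int x = 0; x < texture_width; x++)
    {
        // sample in roughly [0, 1] x [0, 1] so the lowest octave spans
        // the whole texture instead of changing every pixel
        float Val = perlin_noise_2D((float)x / texture_width,
                                    (float)y / texture_height);
        Val = Val / 2.0;
        Val = (Val + 1.0) / 2.0;
        setPixel(x, y, Val, Val, Val);
    }
}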

calculate xSteps and ySteps to get N amount of points in circle

I am trying to fill a grid with points and keep only the points inside an imaginary circle.
First I did this with:
createColorDetectionPoints(int xSteps, int ySteps)
But for me it's a lot easier to set it with a target in mind:
void ofxDTangibleFinder::createColorDetectionPoints(int nTargetPoints)
The target doesn't have to be too precise, but at the moment, when I want 1000 points, for example, I get 2289 points.
I think my logic is wrong, but I can't figure it out.
The idea is to get the right amount of xSteps and ySteps.
Can someone help?
void ofxDTangibleFinder::createColorDetectionPoints(int nTargetPoints) {
    colorDetectionVecs.clear();

    // xSteps and ySteps needs to be calculated
    // the ratio between a rect and ellipse is
    // 0.7853982
    int xSteps = sqrt(nTargetPoints);
    xSteps *= 1.7853982; // make it bigger in proportion to the ratio
    int ySteps = xSteps;

    float centerX = (float)xSteps / 2;
    float centerY = (float)ySteps / 2;

    float fX, fY, d;
    float maxDistSquared = 0.5 * 0.5;

    for (int y = 0; y < ySteps; y++) {
        for (int x = 0; x < xSteps; x++) {
            fX = x;
            fY = y;
            // normalize
            fX /= xSteps - 1;
            fY /= ySteps - 1;
            d = ofDistSquared(fX, fY, 0.5, 0.5);
            if (d <= maxDistSquared) {
                colorDetectionVecs.push_back(ofVec2f(fX, fY));
            }
        }
    }

    // for (int i = 0; i < colorDetectionVecs.size(); i++) {
    //     printf("ellipse(%f, %f, 1, 1);\n", colorDetectionVecs[i].x*100, colorDetectionVecs[i].y*100);
    // }

    printf("colorDetectionVecs: %lu\n", colorDetectionVecs.size());
}
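For what it's worth, one way to derive the step count directly from the target (a sketch based on the area ratio the comment mentions, not tested against the rest of the class): the circle covers pi/4 ≈ 0.7853982 of its bounding square, so the grid needs about nTargetPoints / (pi/4) points in the square, i.e. the factor on sqrt(nTargetPoints) should be sqrt(4/pi) ≈ 1.1284 rather than 1.7853982.
#include <cmath>

// Sketch: pick a square grid so that roughly nTargetPoints of its points
// fall inside the inscribed circle (circle/square area ratio = pi/4).
int stepsForTarget(int nTargetPoints)
{
    const double circleToSquare = M_PI / 4.0;          // ~0.7853982
    double pointsInSquare = nTargetPoints / circleToSquare;
    return (int)std::ceil(std::sqrt(pointsInSquare));  // per-axis step count
}
// e.g. stepsForTarget(1000) == 36, and a 36 x 36 grid keeps ~1018 points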