I am searching for an efficient algorithm and/or code to rasterize curved polygons. Ideally the algorithm would support anti-aliasing with sub-pixel accuracy. My goal is to understand and implement such an algorithm in C++.
Point-In-Spline-Polygon Algorithm
Here is the code to detect whether a point is inside such a curved polygon, and of course one could use it brute-force to rasterize, but that would be too slow.
Essentially I need fast C++ code to produce the images presented in that article.
Brute force code would look something like this.
// draw a single pixel at x,y coordinates
void draw_pixel(int x, int y, int coverage);

// point-in-spline-polygon test from the linked article (assumed signature)
bool pointInSplinePoly(double *poly, double x, double y);

// slow code without anti-aliasing
void brute_force(int x_res, int y_res, double *poly) {
    for (int y = 0; y < y_res; ++y) {
        const double fy = double(y);
        for (int x = 0; x < x_res; ++x) {
            const double fx = double(x);
            if (pointInSplinePoly(poly, fx, fy)) {
                draw_pixel(x, y, 0xFF);
            }
        }
    }
}
Slow code with 2x2 anti-aliasing would look something like the sketch below (a minimal reconstruction reusing pointInSplinePoly and draw_pixel from above; the sub-pixel sample offsets are an arbitrary choice):
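// slow code with 2x2 anti-aliasing: test four sub-pixel sample
// positions per pixel and scale the coverage by the number of hits
void brute_force_2x2(int x_res, int y_res, double *poly) {
    for (int y = 0; y < y_res; ++y) {
        for (int x = 0; x < x_res; ++x) {
            int hits = 0;
            for (int sy = 0; sy < 2; ++sy) {
                for (int sx = 0; sx < 2; ++sx) {
                    const double fx = double(x) + 0.25 + 0.5 * sx;
                    const double fy = double(y) + 0.25 + 0.5 * sy;
                    if (pointInSplinePoly(poly, fx, fy))
                        ++hits;
                }
            }
            if (hits > 0)
                draw_pixel(x, y, (hits * 0xFF) / 4);
        }
    }
}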
Related
I am trying to convert point cloud (x, y, z) data acquired from a Kinect V2 using libfreenect2 into a virtual 2D laser scan (e.g., a horizontal angle/distance vector).
I am currently assigning each pixel column the PCL distance value, as shown below:
std::vector<float> scan(512, 0);
for (unsigned int row = 0; row < 424; ++row) {
    for (unsigned int col = 0; col < 512; ++col) {
        float x, y, z;
        registration->getPointXYZ(depth, row, col, x, y, z);
        if (std::isnan(x) || std::isnan(y) || std::isnan(z)) {
            continue;
        }
        Eigen::Vector3f values = rotate_translate((-1 * x), y - 1.186, z);
        if (scan[col] == 0) {
            scan[col] = values[1];
        }
        if (values[1] < scan[col]) {
            scan[col] = values[1];
        }
    }
}
You may ignore the rotate_translate method; it simply transforms local to global coordinates using the sensor pose.
The problem is best shown using the pictures below:
Whereas the LIDAR range sensor produces the following points map:
the Kinect 2D range scan is curved, and of course narrower, since its horizontal FOV is 70.6 degrees compared to the LIDAR's 270-degree range.
It is this curvature that I am trying to fix. The SLAM/ICP library I'm using is mrpt, and the actual scan data is inserted into an mrpt::obs::CObservation2DRangeScan observation:
auto obs = mrpt::obs::CObservation2DRangeScan();
obs.loadFromVectors(scan.size(), scan.data(), (char*)scan.data());
obs.aperture = mrpt::utils::DEG2RAD(70.6f);
obs.maxRange = 6.0;
obs.rightToLeft = true;
obs.timestamp = mrpt::system::now();
obs.setSensorPose(sensor);
I've searched around Google and SO, and the only answers which seem to address this question are this one and that one. So while I understand that the curvature is the result of assigning each pixel column the PCL value, I am uncertain how to use that insight to remove the curvature.
Each reply seems to take a different approach, and from what I understand the task is a linear interpolation using the angle-per-pixel ratio and the current pixel coordinates? The previously answered question doesn't seem to answer my problem.
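For what it's worth, a hypothetical sketch of that idea: instead of indexing the scan by pixel column, bin each point by its actual horizontal angle, so the scan follows true polar geometry. The binning scheme and axis conventions below are assumptions, not a confirmed fix.
// sketch: replace the per-column assignment inside the row/col loop above
// with angle-based binning (assumes z is forward and x is lateral)
const float aperture = 70.6f * 3.14159265f / 180.0f;
// ... after rotate_translate, for each valid point:
float range = std::sqrt(x*x + z*z);                  // horizontal distance
float angle = std::atan2(x, z);                      // 0 = straight ahead
int bin = int((angle + aperture/2) / aperture * 512);
if (bin >= 0 && bin < 512 && (scan[bin] == 0 || range < scan[bin]))
    scan[bin] = range;                               // keep the nearest return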
"Blocky" Perlin noise
I tried to simplify my code as much as I could to make it readable and understandable.
I don't use a permutation table; instead I use the mt19937 generator.
I use SFML.
#include <SFML/Graphics.hpp>
#include <random>
#include <vector>

using namespace std;
using namespace sf;
typedef Vector2f Vec2;

Sprite spr;
Texture tx;
// dot product
float prod(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }
// linear interpolation
float interp(float start,float end,float coef){return coef*(end-start)+start;}
// get the noise value of a pixel, given its relative position vector
// within the cell (components in [0.0, 1.0]) and the four corner gradients
float getnoise(Vec2&A, Vec2&B, Vec2&C, Vec2&D, Vec2 rel){
    float
    dot_a = prod(A, Vec2(rel.x,   rel.y)),
    dot_b = prod(B, Vec2(rel.x-1, rel.y)),
    dot_c = prod(C, Vec2(rel.x,   rel.y-1)),
    dot_d = prod(D, Vec2(rel.x-1, rel.y-1));
    return interp
        (interp(dot_a, dot_b, rel.x), interp(dot_c, dot_d, rel.x), rel.y);
}
// calculate the pixel's relative [0.0, 1.0] position within its cell
Vec2 getrel(int i, int j, float cellsize){
    return Vec2(
        float(i - (i/int(cellsize))*cellsize)/cellsize, // pixel minus its cell origin, scaled to [0,1)
        float(j - (j/int(cellsize))*cellsize)/cellsize
    );
}
// generates an array of random float values
vector<float> seeded_rand_float(unsigned int seed, int many){
    vector<float> ret;
    std::mt19937 rr;
    std::uniform_real_distribution<float> dist(0, 1.0);
    rr.seed(seed);
    for(int j = 0; j < many; ++j)
        ret.push_back(dist(rr));
    return ret;
}

// use the above function to generate an array of random vectors with [0.0, 1.0] components
vector<Vec2> seeded_rand_vec2(unsigned int seed, int many){
    auto coeffs1 = seeded_rand_float(seed, many*2);
    // auto coeffs2 = seeded_rand_float(seed+1, many); // bad choice!
    vector<Vec2> pushere;
    for(int i = 0; i < many; ++i)
        pushere.push_back(Vec2(coeffs1[2*i], coeffs1[2*i+1]));
        // pushere.push_back(Vec2(coeffs1[i], coeffs2[i]));
    return pushere;
}
// here we make the perlin noise
void make_perlin()
{
    int seed = 43;
    int pixels = 400;   // image size in pixels
    int divisions = 10; // cell squares per side
    float cellsize = float(pixels)/divisions; // size of a cell
    auto randv = seeded_rand_vec2(seed, (divisions+1)*(divisions+1));
    // remap the vectors into the [-1.0, 1.0] range
    for(auto&a : randv)
        a = a*2.0f - Vec2(1.f, 1.f);
    Image img;
    img.create(pixels, pixels, Color(0,0,0));
    // loop over every pixel (using <= here would run both setPixel and
    // the randv indexing below out of bounds on the last row/column)
    for(int j = 0; j < pixels; ++j)
    {
        for(int i = 0; i < pixels; ++i)
        {
            int ii = int(i/cellsize); // cell index
            int jj = int(j/cellsize);
            // these are the nearest gradient vectors for the current pixel
            Vec2
                A = randv[divisions*jj + ii],
                B = randv[divisions*jj + ii+1],
                C = randv[divisions*(jj+1) + ii],
                D = randv[divisions*(jj+1) + ii+1];
            float val = getnoise(A, B, C, D, getrel(i, j, cellsize));
            val = 255.f*(.5f * val + .7f);
            img.setPixel(i, j, Color(val, val, val));
        }
    }
    tx.loadFromImage(img);
    spr.setPosition(Vec2(10,10));
    spr.setTexture(tx);
}
Here are the results; I included the resulting gradient vectors (I multiplied them by cellsize/2).
My question is: why are there white artifacts? You can somehow see the squares...
PS: it has been solved; I posted the fixed source here: http://pastebin.com/XHEpV2UP
Don't make the mistake of applying the smooth interpolation to the result instead of to the coefficient. Normalizing the vectors or adding an offset to avoid zeroes doesn't seem to improve anything. Here is the colorized result:
The human eye is sensitive to discontinuities in the spatial derivative of luminance (brightness). The linear interpolation you're using here is sufficient to make the brightness continuous, but it does not make the derivative of the brightness continuous.
Perlin recommends using eased interpolation to get smoother results. You could use 3*t^2 - 2*t^3 (as suggested in the linked presentation) right in your interpolation function. That should solve the immediate issue.
That would look something like
// interpolation
float linear(float start,float end,float coef){return coef*(end-start)+start;}
float poly(float coef){return 3*coef*coef - 2*coef*coef*coef;}
float interp(float start,float end,float coef){return linear(start, end, poly(coef));}
But note that evaluating a polynomial for every interpolation is needlessly expensive. Usually (including here) this noise is being evaluated over a grid of pixels, with squares being some integer (or rational) number of pixels large; this means that rel.x, rel.y, rel.x-1, and rel.y-1 are quantized to particular possible values. You can make a lookup table for values of the polynomial ahead of time at those values, replacing the "poly" function in the code snippet provided. This technique lets you use even smoother (e.g. degree 5) easing functions at very little additional cost.
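As a rough illustration of that lookup-table idea (a sketch assuming an integer cellsize; build_ease_table is a hypothetical helper name):
// precompute the easing polynomial at the quantized fractional
// positions a pixel can take inside a cell of integer size
std::vector<float> build_ease_table(int cellsize) {
    std::vector<float> table(cellsize);
    for (int k = 0; k < cellsize; ++k) {
        float t = float(k) / float(cellsize);
        table[k] = 3*t*t - 2*t*t*t; // same polynomial as poly()
    }
    return table;
}
// usage: linear(start, end, table[k]), where k is the pixel's offset within its cell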
Although Jerry is correct in his answer above (I would have simply commented there, but I'm still fairly new to StackOverflow and don't yet have enough reputation to comment)...
And his solution of using:
(3*coef*coef) - (2*coef*coef*coef)
to smooth/curve the interpolation factor works.
The slightly better solution is to simplify the equation to:
(3 - (2*coef)) * coef*coef
The resulting curve is virtually identical (there are slight differences, but they are tiny), and there are two fewer multiplications (and still only a single subtraction) per interpolation, resulting in less computational effort.
This reduction in computation can really add up over time, especially if you use the noise function a lot, for instance when generating noise in more than two dimensions.
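Dropped into the earlier snippet, the factored form would look like this:
float poly(float coef){return (3 - 2*coef)*coef*coef;}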
I'm currently working on optical flow with OpenCV in C++. I'm using calcOpticalFlowPyrLK with a grid of points (one interest point for each 5x5-pixel square).
What is the best way to:
1) Compute the histogram of the computed values (orientation and distance) for each frame
2) Compute a histogram of the values (orientation and distance) that a given pixel takes over several frames (for instance 100)
Are OpenCV's functions suited to this task? How can I use them in a simple way in combination with calcOpticalFlowPyrLK?
I was searching for the same OpenCV tools a couple of months ago. Unfortunately, OpenCV does not include any motion-histogram implementation. Instead, what you have to do is run calcOpticalFlowPyrLK for each frame and calculate the orientation/length of each displacement. Then you have to create/fill the histograms yourself. Not as hard as it sounds, believe me :)
The OpenCV implementation for the first part of HOOF can look like this:
const int rows = flow1.rows;
const int cols = flow1.cols;
for (int y = 0; y < rows; ++y)
{
    for (int x = 0; x < cols; ++x)
    {
        Vec2f flow1_at_point = flow1.at<Vec2f>(y, x);
        float u1 = flow1_at_point[0];
        float v1 = flow1_at_point[1];
        magnitudeImage += sqrt((u1*u1) + (v1*v1)); // flow vector length
        orientationImage += atan2(u1, v1);         // flow vector orientation
    }
}
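To make the "create/fill the histograms yourself" step concrete, here is a rough sketch of one possible way: bin each flow vector's orientation into a fixed number of bins, weighting every vote by its magnitude as HOOF does. The bin count and the weighting are illustrative assumptions, not part of the answer above:
// magnitude-weighted orientation histogram for one frame
const int N_BINS = 8;
std::vector<float> hist(N_BINS, 0.f);
for (int y = 0; y < rows; ++y)
{
    for (int x = 0; x < cols; ++x)
    {
        Vec2f f = flow1.at<Vec2f>(y, x);
        float mag = sqrt(f[0]*f[0] + f[1]*f[1]);
        float ang = atan2(f[1], f[0]);                 // in [-pi, pi]
        int bin = int((ang + CV_PI) / (2*CV_PI) * N_BINS);
        if (bin == N_BINS) bin = 0;                    // fold the top edge back
        hist[bin] += mag;                              // magnitude-weighted vote
    }
}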
In my current program I would like to be able to draw "DNA shapes". I have written a DrawPixel(x,y,r,g,b) function which draws a pixel on the screen. Moving on from there, I implemented the Bresenham line algorithm to draw a line: DrawLine(x1,y1,x2,y2,r,g,b).
Now I realized that using an image for the DNA shapes would be a very bad choice (in multiple respects), so I tried making a function to draw a DNA shape myself (as I couldn't find an algorithm). It is currently based on a circle-drawing algorithm (the midpoint circle algorithm):
void D3DGHandler::DrawDNAShape(int x1, int y1, int length, int curves, int dir,
                               int off, int r, int g, int b){
    int x2Pos = sin(dir)*length + x1;
    int y2Pos = cos(dir)*length + y1;
    for (int i = 0; i < curves; i++) {
        int xIncrease = (x2Pos / curves) * i;
        int yIncrease = (y2Pos / curves) * i;
        int rSquared = off * off;
        int xPivot = (int)(off * 0.707107 + 0.5f);
        for (int x = 0; x <= xPivot; x++) {
            int y = (int)(sqrt((float)(rSquared - x*x)) + 0.5f);
            DrawPixel(x1+x+xIncrease, y1+y+yIncrease, r, g, b);
            DrawPixel(x1-x+xIncrease, y1+y+yIncrease, r, g, b);
            DrawPixel(x1+x+xIncrease, y1-y+yIncrease, r, g, b);
            DrawPixel(x1-x+xIncrease, y1-y+yIncrease, r, g, b);
            DrawPixel(x1+y+xIncrease, y1+x+yIncrease, r, g, b);
            DrawPixel(x1-y+xIncrease, y1+x+yIncrease, r, g, b);
            DrawPixel(x1+y+xIncrease, y1-x+yIncrease, r, g, b);
            DrawPixel(x1-y+xIncrease, y1-x+yIncrease, r, g, b);
        }
    }
}
This implementation currently gives me some completely new functionality I was not looking for, along the lines of:
I would be very happy to hear any information you can give me!
Update
Expected result:
But then way more line-like.
You can draw two sinusoidal graphs with some offset; that will give you the required shape.
E.g., in R:
x=(1:100)/10.0
plot(sin(x),x)
points(sin(x+2.5),x)
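Translated to the DrawPixel interface from the question, a minimal C++ sketch might look like this (DrawDNAStrands, amplitude, and frequency are illustrative names and parameters, not part of the original code):
// two phase-offset sine strands stepped along the y axis
void DrawDNAStrands(int x0, int y0, int length, float amplitude,
                    float frequency, int r, int g, int b) {
    const float phase = 2.5f; // strand offset, as in the R example
    for (int i = 0; i < length; ++i) {
        float t = i * frequency;
        DrawPixel(x0 + (int)(amplitude * sin(t) + 0.5f),         y0 + i, r, g, b);
        DrawPixel(x0 + (int)(amplitude * sin(t + phase) + 0.5f), y0 + i, r, g, b);
    }
}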
I'm using the water flow algorithm I got from
http://www.gamedev.net/reference/articles/article915.asp
It seems like when I create a wave, it starts out as a circle. As time passes, the circle dissipates and develops edges at 45 and -45 degree angles, so it no longer looks like a wave but like a square-like shape.
This is the code to start the wave:
void WaveMapPingc(int x, int y, int rd, int str)
{
    // stamp concentric rings of height str around (x, y)
    for(float a = 0; a < 3.14159*2; a += .1)
    {
        for(int r = 1; r < rd; r++)
        {
            WaveMap[CT]
                   [x + (int)((float)r*cos(a) + .5)]
                   [y + (int)((float)r*sin(a) + .5)] = str;
        }
    }
}
This is the code that generates the height map:
void UpdateWaveMap()
{
    int x, y, n;
    // swap the current and next wave buffers
    int Temporary_Value = CT;
    CT = NW;
    NW = Temporary_Value;
    // skip the edges to allow area sampling
    for(y = 1; y < MAXY-1; y++)
    {
        for(x = 1; x < MAXX-1; x++)
        {
            n = ( WaveMap[CT][x-1][y] +
                  WaveMap[CT][x+1][y] +
                  WaveMap[CT][x][y-1] +
                  WaveMap[CT][x][y+1] ) / 2 -
                WaveMap[NW][x][y];
            // damping: pull n towards zero by n/DAMP, or by a single
            // step when integer division truncates n/DAMP to zero
            float sub = (float)n / DAMP;
            int isub = (int)sub;
            if (n < 1 && isub == 0)
                n++;
            else if (n > 1 && isub == 0)
                n--;
            else
                n = n - isub;
            WaveMap[NW][x][y] = n;
        } // x
    } // y
} // function
You're using integers for the positions. This introduces all sorts of quantization problems (for the vertex positions, that is).
I'd say you likely want to keep things in floating point to make them smooth.
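A hypothetical sketch of that, applied to the ping code from the question: keep the ring sample positions as floats and splat each impulse bilinearly into the four nearest cells instead of rounding to a single one (SplatImpulse is an illustrative helper, not part of the original code):
// distribute an impulse across the four nearest grid cells
// according to its sub-cell position
void SplatImpulse(float fx, float fy, int str)
{
    int x = (int)fx, y = (int)fy;    // integer cell
    float dx = fx - x, dy = fy - y;  // sub-cell fractions
    WaveMap[CT][x  ][y  ] = (int)(str * (1 - dx) * (1 - dy));
    WaveMap[CT][x+1][y  ] = (int)(str * dx       * (1 - dy));
    WaveMap[CT][x  ][y+1] = (int)(str * (1 - dx) * dy);
    WaveMap[CT][x+1][y+1] = (int)(str * dx       * dy);
}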
I know this effect from my very first ripple simulation. Solution: use a more detailed convolution kernel. You may also want to add a damping factor to dissipate the ripples, so that numerical errors don't accumulate into a runaway integration. This is the relevant part from one of my early game engines (written ten years ago):
*value_at(Next, x, y)=(
(
(*value_at(Current, x-1, y-1)+
*value_at(Current, x+1, y-1)+
*value_at(Current, x+1, y+1)+
*value_at(Current, x-1, y+1))+
(*value_at(Current, x-1, y)+
*value_at(Current, x+1, y)+
*value_at(Current, x, y-1)+
*value_at(Current, x, y+1))
)/4
-( *value_at(Previous, x, y) )
)*0.98;
Effectively this computes a discrete wave function using a convolution. The higher the resolution of the kernel, the better the quality, and SIMD optimization really helps here. Since a convolution is just a multiplication in Fourier space, it's a very good idea to use a Fourier method if you're interested in high-quality ripples and waves. With some tweaking one can also produce nice gravity waves (not to be confused with gravitational waves from general relativity): http://en.wikipedia.org/wiki/Gravity_wave