Show more points than resolution - C++

I am drawing a graph with 2000+ points to a PDF file. The resolution of the PDF is 612 x 792. Since I am mapping 1 point to 1 pixel, I can only draw 612 points across the page width. How can I plot all 2000+ samples to the PDF? I am using this library: http://www.vulcanware.com/cpp_pdf/index.html.

Option 1: Scale the points, using x = (x * 612) / 2000. This will mean that if 2 points are close to each other (including "similar y") they will overwrite each other.
Option 2: Treat each point as a square, and calculate scaled floating-point values for its "left edge x" and "right edge x" (left_x = ((x - width/2.0) * 612.0) / 2000.0; right_x = ((x + width/2.0) * 612.0) / 2000.0;). Then draw the square with anti-aliasing by calculating, for each destination pixel that overlaps the square, the area of that pixel covered by the square. In this case you will need to do "dest_pixel = min(dest_pixel + area, 1);" to clamp pixel values when squares overlap.
Option 3: Rotate the whole thing 90 degrees so that "x axis" goes vertically down the page (and can be split across multiple pages if necessary); and if this causes a problem for y then use one of the options above for y.
Note that "option 2" can be done in both (vertical and horizontal) directions at the same time. To do this, start by determining the edges of the square, like:
left_x = point_x / MAX_SRC_X * MAX_DEST_X;
right_x = (point_x + 1) / MAX_SRC_X * MAX_DEST_X;
top_y = point_y / MAX_SRC_Y * MAX_DEST_Y;
bottom_y = (point_y + 1) / MAX_SRC_Y * MAX_DEST_Y;
Then have a "for each row that is effected" loop that calculates how much each row is effected, like:
for(int y = top_y; y < bottom_y; y++) {
row_top = fmax(y, top_y);
row_bottom = fmin(y+1, bottom_y);
row_weight = row_bottom - row_top;
Then have a similar "for each column that is affected" loop, like:
for(int x = left_x; x < right_x; x++) {
column_left = fmax(x, left_x);
column_right = fmin(x+1, right_x);
column_weight = column_right - column_left;
Then calculate the area for the pixel, set the pixel, and complete the loops:
dest_pixel_area = row_weight * column_weight;
pixel[y][x].red = min(pixel[y][x].red + dest_pixel_area * red, MAX_RED);
pixel[y][x].green = min(pixel[y][x].green + dest_pixel_area * green, MAX_GREEN);
pixel[y][x].blue = min(pixel[y][x].blue + dest_pixel_area * blue, MAX_BLUE);
}
}
Note: All code above is untested and simplified. It can be faster to break the loops up into "first line/column; loop for middle area only; then last line/column" to remove most of the fmin/fmax.
If you only need to do this in one direction, delete the parts for the direction you don't need and use 1.0 for the corresponding row_weight or column_weight.
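For reference, here is a minimal, self-contained sketch of the Option 2 splatting described above, assuming a 612 x 792 grayscale coverage buffer and treating each sample as a one-source-pixel-wide square (splat_point, DEST_W and MAX_SRC_X are names made up for this example):
#include <algorithm>
#include <cmath>
#include <vector>

const int DEST_W = 612, DEST_H = 792;               // PDF page size in pixels
const double MAX_SRC_X = 2000.0, MAX_SRC_Y = 792.0; // source sample range
std::vector<double> coverage(DEST_W * DEST_H, 0.0); // per-pixel coverage in [0, 1]

// Splat one source sample: scale its unit square into destination space and
// distribute its area over every destination pixel it overlaps.
void splat_point(double point_x, double point_y)
{
    double left_x   = point_x       / MAX_SRC_X * DEST_W;
    double right_x  = (point_x + 1) / MAX_SRC_X * DEST_W;
    double top_y    = point_y       / MAX_SRC_Y * DEST_H;
    double bottom_y = (point_y + 1) / MAX_SRC_Y * DEST_H;

    for (int y = (int)std::floor(top_y); y < (int)std::ceil(bottom_y); ++y)
    {
        double row_weight = std::fmin(y + 1.0, bottom_y) - std::fmax((double)y, top_y);
        for (int x = (int)std::floor(left_x); x < (int)std::ceil(right_x); ++x)
        {
            double column_weight = std::fmin(x + 1.0, right_x) - std::fmax((double)x, left_x);
            double &px = coverage[y * DEST_W + x];
            px = std::fmin(px + row_weight * column_weight, 1.0); // clamp when squares overlap
        }
    }
}
The coverage values can then be mapped to gray levels (or alpha) when the pixels are written out to the PDF.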

Related

How can I draw smooth pixels in a netpbm image?

I just wrote a small netpbm parser and I am having fun with it, drawing mostly parametric equations. They look OK for a first time thing, but how can I expand upon this and have something that looks legit? This picture is how my method recreated the Arctic Monkeys logo which was just
0.5[cos(19t) - cos(21t)]
(I was trying to plot both cosines first before superpositioning them)
It obviously looks very "crispy" and sharp. I used as small of a step size as I could without it taking forever to finish. (0.0005, takes < 5 sec)
The only idea I had was that when drawing a white pixel, I should also draw its immediate neighbors with some slightly lighter gray. And then draw the neighbors of THOSE pixels with even lighter gray. Almost like the white color is "dissolving" or "dissipating".
I didn't try to implement this because it felt like a really bad way to do it and I am not even sure it'd produce anything near the desirable effect so I thought I'd ask first.
EDIT: here's a sample code that draws just a small spiral
the draw loop:
for (double t = 0; t < 6 * M_PI; t += 0.0005)
{
    double r = t;
    double new_x = 10 * r * cosf(0.1 * M_PI * t);
    double new_y = -10 * r * sinf(0.1 * M_PI * t);
    img.SetPixel(new_x + img.img_width / 2, new_y + img.img_height / 2, 255);
}
//img is a PPM image with magic number P5 (binary grayscale)
SetPixel:
void PPMimage::SetPixel(const uint16_t x, const uint16_t y, const uint16_t pixelVal)
{
assert(pixelVal <= max_greys && "pixelVal larger than image's maximum grey value");
assert(x < img_width && "X value larger than image width");
assert(y < img_height && "Y value larger than image height");
img_raster[y * img_width + x] = pixelVal;
}
This is what this code produces
A very basic form of antialiasing for a scatter plot (made of points rather than lines) can be achieved by applying something like stochastic rounding: consider the brush to be a pixel-sized square (but note the severe limitations of this model), centered at the non-integer coordinates of the plotted point, and compute its overlap with each of the four pixels that share the corner closest to that point. Treat each overlap fraction as a grayscale value: for a large number of points approximating a line, set each pixel to the largest value seen; for a small number of discrete points, do alpha blending instead.
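A minimal sketch of that four-pixel coverage splat, assuming pixel centers sit at integer coordinates and a SetPixel interface like the one in the question above; GetPixel and plot_point_aa are illustrative names, not part of the original class:
#include <algorithm>
#include <cmath>
#include <cstdint>

// Splat a point at non-integer (px, py) over the four pixels whose shared corner
// is nearest to it, weighting each by its overlap with a pixel-sized square brush.
void plot_point_aa(PPMimage &img, double px, double py, uint16_t maxVal)
{
    int x0 = (int)std::floor(px);
    int y0 = (int)std::floor(py);
    double fx = px - x0;   // fractional offsets toward the neighbours at x0+1, y0+1
    double fy = py - y0;

    double w[4]  = { (1 - fx) * (1 - fy), fx * (1 - fy),  // overlap areas, summing to 1
                     (1 - fx) * fy,       fx * fy };
    int    dx[4] = { 0, 1, 0, 1 };
    int    dy[4] = { 0, 0, 1, 1 };

    for (int i = 0; i < 4; ++i)
    {
        int x = x0 + dx[i], y = y0 + dy[i];
        if (x < 0 || y < 0 || x >= img.img_width || y >= img.img_height)
            continue;
        uint16_t v = (uint16_t)std::lround(w[i] * maxVal);
        // "largest value wins" blend, suitable for many points approximating a curve;
        // GetPixel is an assumed accessor for the existing raster value.
        img.SetPixel(x, y, std::max<uint16_t>(v, img.GetPixel(x, y)));
    }
}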

How to draw oblique hemisphere along a certain volume?

What I am trying to do:
Make an empty 3D image (.dcm in this case) with image direction as
[1,0,0;
0,1,0;
0,0,1].
In this image, I insert an oblique trajectory, which essentially represents a cuboid. Now I wish to insert a hollow hemisphere in this cuboid (cuboid with all white pixels - constant value, hemisphere can be anything but differentiable), so that it is aligned along the axis of the trajectory.
What I am getting
So I used the general formula for a sphere:
x = x0 + r*cos(theta)*sin(alpha)
y = y0 + r*sin(theta)*sin(alpha)
z = z0 + r*cos(alpha)
where 0 <= theta <= 2 * pi and 0 <= alpha <= pi / 2 for a hemisphere.
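For reference, a minimal sketch of sampling surface points from that parameterization (the 0.05 step sizes are arbitrary placeholders, and Vector3d is Eigen's, as in the code below):
#include <cmath>
#include <vector>
#include <Eigen/Dense>
using Eigen::Vector3d;

// Sample the hemisphere surface of radius r around 'center' using the formula above.
std::vector<Vector3d> sampleHemisphere(const Vector3d &center, double r)
{
    std::vector<Vector3d> points;
    for (double theta = 0; theta <= 2 * M_PI; theta += 0.05)
        for (double alpha = 0; alpha <= M_PI / 2; alpha += 0.05)
            points.push_back(center + r * Vector3d(std::cos(theta) * std::sin(alpha),
                                                   std::sin(theta) * std::sin(alpha),
                                                   std::cos(alpha)));
    return points;
}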
What I tried to achieve this
So first I thought to just get the rotation matrix, between the image coordinate system and the trajectory coordinate system and multiply all points on the sphere with it. This didn't give me desired results as the rotated sphere was scaled and translated. I don't get why this was happening as I checked the points myself.
Then I thought why not make a hemisphere out of a sphere, which is cut by a plane lying parallel to the y,z plane of the trajectory coordinate system. For this, I calculated the angle between the x, y and z axes of the image and those of the trajectory. Then, I started to get hemisphere coordinates for theta_rotated and alpha_rotated. This didn't work either, as instead of a hemisphere I was getting a rather weird sphere.
This is without any transformations
This is with the angle transformation (second try)
For reference,
The trajectory coordinate system :
[-0.4744, -0.0358506, -0.8553;
-0.7049, 0.613244, 0.3892;
-0.5273, -0.787537, 0.342;];
which gives angles:
x_axis angle 2.06508 pi
y_axis angle 2.2319 pi
z_axis angle 1.22175 pi
Code to generate the cuboid
void getTrajectoryPoints(std::vector<Vector3d> &trajectoryPoints, Vector3d &target, Vector3d &tangent){
double distanceFromTarget = 10;
int targetShift = 4;
target -= z_vector;
target -= (tangent * targetShift);
Vector3d vector_x = -tangent;
y_vector = z_vector.cross(vector_x);
target -= y_vector;
Vector3d start = target - vector_x * distanceFromTarget;
std::cout << "target = " << target << "start = " << start << std::endl;
std::cout << "x " << vector_x << " y " << y_vector << " z " << z_vector << std::endl;
double height = 0.4;
while (height <= 1.6)
{
double width = 0.4;
while (width <= 1.6){
distanceFromTarget = 10;
while (distanceFromTarget >= 0){
Vector3d point = target + tangent * distanceFromTarget;
//std::cout << (point + (z_vector*height) - (y_vector * width)) << std::endl;
trajectoryPoints.push_back(point + (z_vector * height) + (y_vector * width));
distanceFromTarget -= 0.09;
}
width += 0.09;
}
height += 0.09;
}
}
The height and width are incremented with respect to the voxel spacing.
Do you guys know how to achieve this, and what I am doing wrong? Kindly let me know if you need any other info.
EDIT 1
After the answer from #Dzenan, I tried the following:
target = { -14.0783, -109.8260, -136.2490 }, tangent = { 0.4744, 0.7049, 0.5273 };
typedef itk::Euler3DTransform<double> TransformType;
TransformType::Pointer transform = TransformType::New();
double centerForTransformation[3];
const double pi = std::acos(-1);
try{
transform->SetRotation(2.0658*pi, 1.22175*pi, 2.2319*pi);
// transform->SetMatrix(transformMatrix);
}
catch (itk::ExceptionObject &excp){
std::cout << "Exception caught ! " << excp << std::endl;
transform->SetIdentity();
}
transform->SetCenter(centerForTransformation);
Then I loop over all the points in the hemisphere and transform them using,
point = transform->TransformPoint(point);
Although I'd prefer to pass the matrix equal to the trajectory coordinate system (mentioned above) directly, that matrix isn't orthogonal and ITK won't accept it. It must be said that I used the same matrix for resampling this image and extracting the cuboid, and that was fine. So instead I found the angles between x_image and x_trajectory, y_image and y_trajectory, and z_image and z_trajectory, and used SetRotation, which gives me the following result (still incorrect):
EDIT 2
I tried to get the sphere coordinates without actually using the polar coordinates. Following discussion with #jodag, this is what I came up with:
Vector3d center = { -14.0783, -109.8260, -136.2490 };
double height = 0.4;
while (height <= 1.6)
{
double width = 0.4;
while (width <= 1.6){
double distanceFromTarget = 5;
while (distanceFromTarget >= 0){
// Make sure the point lies along the cuboid direction vectors
Vector3d point = center + tangent * distanceFromTarget + (z_vector * height) + (y_vector * width);
double x = std::sqrt((point[0] - center[0]) * (point[0] - center[0]) + (point[1] - center[1]) * (point[1] - center[1]) + (point[2] - center[2]) * (point[2] - center[2]));
if ((x <= 0.5) && (point[2] >= -136.2490 ))
orientation.push_back(point);
distanceFromTarget -= 0.09;
}
width += 0.09;
}
height += 0.09;
}
But this doesn't seem to work either.
This is the output
I'm a little confused about your first plot because it appears that the points being displayed are not defined in the image coordinates. The example I'm posting below assumes that voxels must be part of the image coordinate system.
The code below transforms the voxel coordinates in the image space into the trajectory space using an inverse transformation. It then rasterises a 2x2x2 cube centered around (0,0,0) and a radius-0.9 hemisphere sliced along the xy plane.
Rather than continuing a long discussion in the comments I've decided to post this. Please comment if you're looking for something different.
% define trajectory coordinate matrix
R = [-0.4744, -0.0358506, -0.8553;
-0.7049, 0.613244, 0.3892;
-0.5273, -0.787537, 0.342]
% initialize 50x50x50 3d image
[x,y,z] = meshgrid(linspace(-2,2,50));
sz = size(x);
x = reshape(x,1,[]);
y = reshape(y,1,[]);
z = reshape(z,1,[]);
r = ones(size(x));
g = ones(size(x));
b = ones(size(x));
blue = [0,0,1];
green = [0,1,0];
% transform image coordinates to trajectory coordinates
vtraj = R\[x;y;z];
xtraj = vtraj(1,:);
ytraj = vtraj(2,:);
ztraj = vtraj(3,:);
% rasterize 2x2x2 cube in trajectory coordinates
idx = (xtraj <= 1 & xtraj >= -1 & ytraj <= 1 & ytraj >= -1 & ztraj <= 1 & ztraj >= -1);
r(idx) = blue(1);
g(idx) = blue(2);
b(idx) = blue(3);
% rasterize radius 0.9 hemisphere in trajectory coordinates
idx = (sqrt(xtraj.^2 + ytraj.^2 + ztraj.^2) <= 0.9) & (ztraj >= 0);
r(idx) = green(1);
g(idx) = green(2);
b(idx) = green(3);
% plot just the blue and green voxels
green_idx = (r == green(1) & g == green(2) & b == green(3));
blue_idx = (r == blue(1) & g == blue(2) & b == blue(3));
figure(1); clf(1);
plot3(x(green_idx),y(green_idx),z(green_idx),' *g')
hold('on');
plot3(x(blue_idx),y(blue_idx),z(blue_idx),' *b')
axis([-2,2,-2,2,-2,2]);
axis('equal');
axis('vis3d');
You can generate your hemisphere in some physical space, then transform it (translate and rotate) using e.g. a RigidTransform's TransformPoint method. Then use the TransformPhysicalPointToIndex method of itk::Image. Finally, use the SetPixel method to change the intensity. Using this approach you will have to control the resolution of your hemisphere so that it fully covers all the voxels in the image.
An alternative approach is to construct a new image in which you create your hemisphere, then use a resample filter to create a transformed version of the hemisphere in an arbitrary image.
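A rough sketch of the first approach, assuming an ITK image and an Euler3DTransform like the one in EDIT 1 above (the pixel type, radius, intensity and step sizes are placeholders):
#include "itkImage.h"
#include "itkEuler3DTransform.h"
#include <cmath>

using ImageType = itk::Image<short, 3>;

// Rasterize a transformed hemisphere surface into an existing image.
// 'transform' maps hemisphere coordinates into the image's physical space; the step
// sizes must be fine enough (relative to the voxel spacing) that no surface voxel is skipped.
void drawHemisphere(ImageType::Pointer image,
                    itk::Euler3DTransform<double>::Pointer transform,
                    double radius, short intensity)
{
    const double pi = std::acos(-1.0);
    for (double theta = 0; theta <= 2 * pi; theta += 0.01)
    {
        for (double alpha = 0; alpha <= pi / 2; alpha += 0.01)
        {
            itk::Point<double, 3> p;
            p[0] = radius * std::cos(theta) * std::sin(alpha);
            p[1] = radius * std::sin(theta) * std::sin(alpha);
            p[2] = radius * std::cos(alpha);

            itk::Point<double, 3> q = transform->TransformPoint(p); // into physical space
            ImageType::IndexType idx;
            if (image->TransformPhysicalPointToIndex(q, idx))        // skip points outside the image
                image->SetPixel(idx, intensity);
        }
    }
}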

Draw circles using for-loop Qt c++

I want to draw 7 circles using Qt in C++, using a for loop.
However, I am unable to. I actually wanted to use the width of the window to equally space my circles, which is not working out either. Where am I going wrong? I am new to programming. :)
Here is my code:
for (int i = 0; i <= 6;i++)
{
int x = 0;
int y = (width()/6);
x =y+x;
canvas.drawEllipse(x, 40, 20, 20);
}
Okay, I was working on it, and now I have five circles with this code:
int x = 0;
for (int i = 0; i <= 6;i++)
{
x = x+(width()/6);
canvas.drawEllipse(x, 40, 20, 20);
}
But I want the first circle to start at
canvas.drawEllipse(0, 40, 20, 20);
In addition, how can I change the color of one circle when switching from one page to another? It's an application with about 7 pages, and each circle would represent a page; for example, if I am on page 1, circle 1 should be green.
I have been told to create a function for this, but how do I go about referencing my pages and the circles? Thanks.
Let's do some math here.
Let Screen_Width be the width of the screen, in pixels.
Let Screen_Height be the height of the screen, in pixels.
The width of an ideal circle is the same as the diameter, or 2 * radius.
However, this is reality, so we have to account for line widths.
So the actual width of a circle is: Diameter + 2 * Circle_Line_Width;
Also, this being reality and not ideal conditions, we would like spacing between the circles.
Let Space_Between_Circles be the distance, in pixels, between the outer edges of adjacent circles.
Let Circle_Quantity be the number of circles.
So, the total width occupied by each circle's slot (with half the spacing on each side) is:
Circle_Width = Diameter + 2 * Circle_Line_Width + Space_Between_Circles;
The space available for a circle (with spacing) is:
Available_Circle_Space = Screen_Width / Circle_Quantity;
Now comes the trick, locating the centers of the circles.
Let's find out the values of the circle properties.
Setting Circle_Width equal to Available_Circle_Space and solving for the diameter:
Diameter = Available_Circle_Space - 2 * Circle_Line_Width - Space_Between_Circles;
Remember, the center of the circle will be the midpoint of the diameter, which is Diameter / 2.
So, the first center point is:
0 /*Left edge ordinate */
+ (Space_Between_Circles/2)
+ Circle_Line_Width
+ (Diameter / 2)
Each following center point is one full slot further along:
Previous_Center_Point
+ Diameter
+ 2 * Circle_Line_Width
+ Space_Between_Circles
This should show you how to make a for loop to draw all the circles.
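Putting that together, a sketch of the drawing loop (assumed to live inside paintEvent, with 'canvas' being a QPainter on the widget as in the question; the line width and spacing values are illustrative):
// Inside paintEvent(QPaintEvent*), with QPainter canvas(this);
const int circleQuantity      = 7;
const int circleLineWidth     = 2;   // pen width, illustrative
const int spaceBetweenCircles = 10;  // gap between adjacent circles, illustrative

const int availableCircleSpace = width() / circleQuantity;
const int diameter = availableCircleSpace - 2 * circleLineWidth - spaceBetweenCircles;

for (int i = 0; i < circleQuantity; ++i)
{
    // Left edge of this circle's bounding box, following the math above.
    int x = i * availableCircleSpace + spaceBetweenCircles / 2 + circleLineWidth;
    canvas.drawEllipse(x, 40, diameter, diameter);
}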
Switch around the two statements in the loop:
int x = 0;
for (int i = 0; i <= 6;i++)
{
canvas.drawEllipse(x, 40, 20, 20); //0 on 1st iteration
x = x+(width()/6); //x = 0 + (width()/6), which will be used on 2nd iteration, etc.
}
To use x as 0 on the first iteration, you postpone adding width()/6 until after you've used it.

Optimized float Blur variations

I am looking for optimized functions in C++ for calculating areal averages of floats. The function is passed a source float array, a destination float array (the same size as the source array), the array width and height, and the "blurring" area width and height.
The function should wrap around the edges for the blurring/average calculations.
Here is example code that blurs with a rectangular shape:
/*****************************************
* Find averages extended variations
*****************************************/
void findaverages_ext(float *floatdata, float *dest_data, int fwidth, int fheight, int scale, int aw, int ah, int weight, int xoff, int yoff)
{
printf("findaverages_ext scale: %d, width: %d, height: %d, weight: %d \n", scale, aw, ah, weight);
float total = 0.0;
int spos = scale * fwidth * fheight;
int apos;
int w = aw;
int h = ah;
float* f_temp = new float[fwidth * fheight];
// Horizontal
for(int y=0;y<fheight ;y++)
{
Sleep(10); // Do not burn your processor
total = 0.0;
// Process entire window for first pixel (including wrap-around edge)
for (int kx = 0; kx <= w; ++kx)
if (kx >= 0 && kx < fwidth)
total += floatdata[y*fwidth + kx];
// Wrap
for (int kx = (fwidth-w); kx < fwidth; ++kx)
if (kx >= 0 && kx < fwidth)
total += floatdata[y*fwidth + kx];
// Store first window
f_temp[y*fwidth] = (total / (w*2+1));
for(int x=1;x<fwidth ;x++) // x width changes with y
{
// Subtract pixel leaving window
if (x-w-1 >= 0)
total -= floatdata[y*fwidth + x-w-1];
// Add pixel entering window
if (x+w < fwidth)
total += floatdata[y*fwidth + x+w];
else
total += floatdata[y*fwidth + x+w-fwidth];
// Store average
apos = y * fwidth + x;
f_temp[apos] = (total / (w*2+1));
}
}
// Vertical
for(int x=0;x<fwidth ;x++)
{
Sleep(10); // Do not burn your processor
total = 0.0;
// Process entire window for first pixel
for (int ky = 0; ky <= h; ++ky)
if (ky >= 0 && ky < fheight)
total += f_temp[ky*fwidth + x];
// Wrap
for (int ky = fheight-h; ky < fheight; ++ky)
if (ky >= 0 && ky < fheight)
total += f_temp[ky*fwidth + x];
// Store first if not out of bounds
dest_data[spos + x] = (total / (h*2+1));
for(int y=1;y< fheight ;y++) // y width changes with x
{
// Subtract pixel leaving window
if (y-h-1 >= 0)
total -= f_temp[(y-h-1)*fwidth + x];
// Add pixel entering window
if (y+h < fheight)
total += f_temp[(y+h)*fwidth + x];
else
total += f_temp[(y+h-fheight)*fwidth + x];
// Store average
apos = y * fwidth + x;
dest_data[spos+apos] = (total / (h*2+1));
}
}
delete[] f_temp;
}
What I need are similar functions that, for each pixel, find the average (blur) of pixels over shapes other than rectangular.
The specific shapes are: "S" (sharp edges), "O" (rectangular but hollow), "+" and "X", where the average float is stored at the center pixel on destination data array. Size of blur shape should be variable, width and height.
The functions do not need to be pixel-perfect, only optimized for performance. There could be separate functions for each shape.
I would also be happy if anyone can give me tips on how to optimize the example function above for rectangular blurring.
What you are trying to implement are various sorts of digital filters for image processing. This is equivalent to convolving two signals, where the second one is the filter's impulse response. So far, you recognized that a "rectangular average" is separable. By separable I mean you can split the filter into two parts: one that operates along the X axis and one that operates along the Y axis -- in each case a 1D filter. This is nice and can save you lots of cycles. But not every filter is separable. Averaging along other shapes (S, O, +, X) is not separable. You need to actually compute a 2D convolution for these.
As for performance, you can speed up your 1D averages by properly implementing a "moving average". A proper "moving average" implementation only requires a small, fixed amount of work per pixel regardless of the averaging "window". This can be done by recognizing that neighbouring pixels of the target image are computed as an average of almost the same pixels. You can reuse these sums for the neighbouring target pixel by adding one new pixel intensity and subtracting an older one (for the 1D case).
In the case of arbitrary non-separable filters your best bet performance-wise is "fast convolution", which is FFT-based. Check out www.dspguide.com. If I recall correctly, there is even a chapter on how to properly do "fast convolution" using the FFT algorithm. Although they explain it for 1-dimensional signals, it also applies to 2-dimensional signals. For images you have to perform 2D FFT/iFFT transforms.
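To illustrate the moving-average idea for the 1D case, here is a minimal sketch of one horizontal pass with wrap-around, using the same layout as the code in the question (window half-width w, so 2*w+1 samples per average):
// One wrap-around moving average over a single row.
// src/dst point to the first pixel of the row; fwidth is the row length, w < fwidth the half-width.
void moving_average_row(const float *src, float *dst, int fwidth, int w)
{
    float total = 0.0f;
    // Sum the initial window centered on x = 0, wrapping at the edges.
    for (int k = -w; k <= w; ++k)
        total += src[(k + fwidth) % fwidth];
    dst[0] = total / (2 * w + 1);

    for (int x = 1; x < fwidth; ++x)
    {
        // Slide the window: subtract the sample that leaves, add the one that enters.
        total -= src[(x - w - 1 + fwidth) % fwidth];
        total += src[(x + w) % fwidth];
        dst[x] = total / (2 * w + 1);
    }
}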
To add to sellibitze's answer, you can use a summed area table for your O, S and + kernels (not for the X one though). That way you can convolve a pixel in constant time, and it's probably the fastest method to do it for kernel shapes that allow it.
Basically, a SAT is a data structure that lets you calculate the sum of any axis-aligned rectangle. For the O kernel, after you've built a SAT, you'd take the sum of the outer rect's pixels and subtract the sum of the inner rect's pixels. The S and + kernels can be implemented similarly.
For the X kernel you can use a different approach. A skewed box filter is separable:
You can convolve with two long, thin skewed box filters, then add the two resulting images together. The center of the X will be counted twice, so you will need to convolve with another skewed box filter and subtract that.
Apart from that, you can optimize your box blur in many ways.
Remove the two ifs from the inner loop by splitting that loop into three loops - two short loops that do checks, and one long loop that doesn't. Or you could pad your array with extra elements from all directions - that way you can simplify your code.
Calculate values like h * 2 + 1 outside the loops.
An expression like f_temp[ky*fwidth + x] does two adds and one multiplication. You can initialize a pointer to &f_temp[ky*fwidth] outside the loop, and just increment that pointer in the loop.
Don't do the division by w * 2 + 1 in the horizontal step. Instead, divide once by (w * 2 + 1) * (h * 2 + 1) in the vertical step.
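As a sketch of the summed-area-table idea above, here is how the "O" (hollow rectangle) average could be computed, ignoring the wrap-around edges for brevity (build_sat, box_sum and the half-size parameters are illustrative names, not from the original code):
#include <vector>

// Summed area table with an extra zero row/column: sat[(y+1)*(fwidth+1) + (x+1)]
// holds the sum of src over rows [0..y] and columns [0..x].
std::vector<float> build_sat(const float *src, int fwidth, int fheight)
{
    std::vector<float> sat((fwidth + 1) * (fheight + 1), 0.0f);
    for (int y = 0; y < fheight; ++y)
        for (int x = 0; x < fwidth; ++x)
            sat[(y + 1) * (fwidth + 1) + (x + 1)] =
                src[y * fwidth + x]
                + sat[y * (fwidth + 1) + (x + 1)]
                + sat[(y + 1) * (fwidth + 1) + x]
                - sat[y * (fwidth + 1) + x];
    return sat;
}

// Sum of src over the half-open rectangle [x0, x1) x [y0, y1); satw must be fwidth + 1.
float box_sum(const std::vector<float> &sat, int satw, int x0, int y0, int x1, int y1)
{
    return sat[y1 * satw + x1] - sat[y0 * satw + x1]
         - sat[y1 * satw + x0] + sat[y0 * satw + x0];
}

// "O" average centered at (cx, cy): outer box minus inner box, divided by the ring's pixel count.
// The caller must keep both boxes inside the image (no wrap-around handled here).
float o_average(const std::vector<float> &sat, int satw,
                int cx, int cy, int outer_half, int inner_half)
{
    float outer = box_sum(sat, satw, cx - outer_half, cy - outer_half,
                          cx + outer_half + 1, cy + outer_half + 1);
    float inner = box_sum(sat, satw, cx - inner_half, cy - inner_half,
                          cx + inner_half + 1, cy + inner_half + 1);
    int outer_n = (2 * outer_half + 1) * (2 * outer_half + 1);
    int inner_n = (2 * inner_half + 1) * (2 * inner_half + 1);
    return (outer - inner) / (outer_n - inner_n);
}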

Perspective correct texture mapping; z distance calculation might be wrong

I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This gives a highest, lowest and middle point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
Then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta each scanline. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, to which deltas are applied each step.
This works fine except when I try to do perspective correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part shows where the deltas are flipped. As you can see, the two triangles both seem to be going towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
Vec3 projected = m_Rotation * (a_Point - m_Position);
// #define FOV_ANGLE 60.f
// static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
// static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
float zcamera = DEPTH / projected.z;
return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The ZBuffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The ZBuffer will, for every pixel of the triangle, determine if a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this can not be the issue.
I've made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence what you've got is not bad! A thing like this could be caused by a very small error.
General tips in debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc etc) and step through it with the debugger and see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account); if you also want to do Gouraud shading you also have to do everything for R, G and B similar to what you are doing for U, V and Z:
The idea is that a triangle can be broken down into 2 parts: the top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both parts you need to calculate the step variables with which you are interpolating. The example below shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I do already calculate the needed interpolation offsets for the bottom part in the below 'pseudocode' fragment
first order the coords(x,y,z,u,v) in the order so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
the second part of the triangle is calculated dependent on: if the left side of the triangle is now really on the leftside (or needs swapping)
code fragment:
if (leftDeltaX < rightDeltaX)
{
leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
rightDeltaX2 = rightDeltaX
leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
swap(leftDeltaX, rightDeltaX);
leftDeltaX2 = leftDeltaX;
rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaU2 = leftDeltaU
leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaV2 = leftDeltaV
leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU to u[0], currentLeftV to v[0] and currentLeftZ to z[0]
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x,u,v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats)
For my fixed-point algorithm this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display.
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for y=startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line
leftCurX = ceil(startx); leftCurY = ceil(endy);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes everytime is too slow)
unsigned int *buf = destbuf + (y * pitch) + startX; (unsigned int in case you are doing 24 bit or 32 bit rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilineair interpolation you do the following):
code fragment:
float tv = startV / startZ
float tu = startU / startZ;
tv %= texturePitch; //make sure the texture coordinates stay on the texture if they are too wide/high
tu %= texturePitch; //I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf+tu + (tv*texturePitch); //in case of fixedpoints one could have shifted the tv. Now we have to multiply everytime.
int destColTm = *(textPtr); //this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer if the previously plotted pixel at this coordinate is higher or lower then ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I have read, Graphics Programming Black Book by Michael Abrash, is available online.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
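For reference, a minimal sketch of per-pixel perspective-correct interpolation across one scanline, using the interpolate-u/z, v/z and 1/z scheme discussed above (DrawScanline and Shade are illustrative names, not from the question's code):
void Shade(int x, int y, float u, float v); // stand-in for the texture fetch / pixel write

// Interpolate perspective-correct UVs across one scanline from x0 to x1.
// (u0, v0, z0) and (u1, v1, z1) are the attributes at the scanline endpoints,
// with z being the view-space depth at each endpoint.
void DrawScanline(int y, int x0, int x1,
                  float u0, float v0, float z0,
                  float u1, float v1, float z1)
{
    // u/z, v/z and 1/z are affine in screen space, so they may be interpolated linearly.
    float invz0 = 1.0f / z0, invz1 = 1.0f / z1;
    float uoz0  = u0 * invz0, uoz1 = u1 * invz1;
    float voz0  = v0 * invz0, voz1 = v1 * invz1;

    for (int x = x0; x < x1; ++x)
    {
        float t    = (x1 == x0) ? 0.0f : (float)(x - x0) / (float)(x1 - x0);
        float invz = invz0 + t * (invz1 - invz0);  // interpolated 1/z
        float uoz  = uoz0  + t * (uoz1  - uoz0);   // interpolated u/z
        float voz  = voz0  + t * (voz1  - voz0);   // interpolated v/z

        // Recover the perspective-correct texture coordinates by dividing by 1/z.
        Shade(x, y, uoz / invz, voz / invz);
    }
}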