Optimize C++ bitmap processing algorithm [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I have written the following algorithm (for Android/NDK) to apply levels to a bitmap. The problem is that it is really slow: on a fast device such as the SGS III it can take up to 4 seconds for an 8MP image, and on ARMv6 devices it takes ages (over 10 seconds). Is there any way to optimize it?
void applyLevels(unsigned int *rgb, const unsigned int width, const unsigned int height,
                 const float exposure, const float brightness, const float contrast, const float saturation)
{
    float R, G, B;
    unsigned int pixelIndex = 0;
    const float exposureFactor = powf(2.0f, exposure);
    const float brightnessFactor = brightness / 10.0f;
    const float contrastFactor = contrast > 0.0f ? contrast : 0.0f;
    for (unsigned int y = 0; y < height; y++)
    {
        for (unsigned int x = 0; x < width; x++)
        {
            const unsigned int pixelValue = rgb[pixelIndex];
            R = ((pixelValue & 0xff0000) >> 16) / 255.0f;
            G = ((pixelValue & 0xff00) >> 8) / 255.0f;
            B = (pixelValue & 0xff) / 255.0f;
            // Clamp values
            R = R > 1.0f ? 1.0f : R < 0.0f ? 0.0f : R;
            G = G > 1.0f ? 1.0f : G < 0.0f ? 0.0f : G;
            B = B > 1.0f ? 1.0f : B < 0.0f ? 0.0f : B;
            // Exposure
            R *= exposureFactor;
            G *= exposureFactor;
            B *= exposureFactor;
            // Contrast
            R = ((R - 0.5f) * contrastFactor) + 0.5f;
            G = ((G - 0.5f) * contrastFactor) + 0.5f;
            B = ((B - 0.5f) * contrastFactor) + 0.5f;
            // Saturation
            float gray = (R * 0.3f) + (G * 0.59f) + (B * 0.11f);
            R = gray * (1.0f - saturation) + R * saturation;
            G = gray * (1.0f - saturation) + G * saturation;
            B = gray * (1.0f - saturation) + B * saturation;
            // Brightness
            R += brightnessFactor;
            G += brightnessFactor;
            B += brightnessFactor;
            // Clamp values
            R = R > 1.0f ? 1.0f : R < 0.0f ? 0.0f : R;
            G = G > 1.0f ? 1.0f : G < 0.0f ? 0.0f : G;
            B = B > 1.0f ? 1.0f : B < 0.0f ? 0.0f : B;
            // Store new pixel value
            R *= 255.0f;
            G *= 255.0f;
            B *= 255.0f;
            rgb[pixelIndex] = ((int)R << 16) | ((int)G << 8) | (int)B;
            pixelIndex++;
        }
    }
}

Most of your computations can be trivially tabled... the whole processing can become:
for (int i = 0; i < n; i++) {
    int px = buffer[i];
    int r = tab1[(px >> 16) & 255];
    int g = tab1[(px >> 8) & 255];
    int b = tab1[px & 255];
    int gray = (kr * r + kg * g + kb * b) >> 16;
    int grayval = tsat1[gray];
    r = brtab[tsat2[r] + grayval];
    g = brtab[tsat2[g] + grayval];
    b = brtab[tsat2[b] + grayval];
    buffer[i] = (r << 16) | (g << 8) | b;
}
where:
tab1 is a table of 256 bytes tabling the result of the exposure and contrast processing,
tsat1 and tsat2 are 256-byte tables for the saturation processing,
brtab is a 512-byte table for the brightness processing.
Note that without saturation processing you would need just one lookup per component in a single 256-byte table.
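For illustration, a table like tab1 could be built once per call, reusing the question's exposureFactor and contrastFactor (just a sketch; the final clamp is folded into the table here):
unsigned char tab1[256];
for (int v = 0; v < 256; v++) {
    float c = v / 255.0f;                      // normalize as in the original code
    c *= exposureFactor;                       // exposure
    c = (c - 0.5f) * contrastFactor + 0.5f;    // contrast
    int out = (int)(c * 255.0f + 0.5f);        // back to 0..255
    tab1[v] = (unsigned char)(out < 0 ? 0 : (out > 255 ? 255 : out));
}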
A huge part of the speed problem can be that you are doing floating-point computations on hardware with no dedicated FPU; a software implementation of floating point is really slow.

You're reducing your fast int-based RGB values to slower floats and then using a lot of floating-point multiplication for your adjustments. It is better to multiply your adjustments (brightness, saturation, etc.) by 256, store them as ints, and avoid any floating point in your inner loop.
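For example, the saturation blend could be done entirely in integers, assuming sat256 is the saturation pre-multiplied by 256 (a sketch, not the full filter):
// sat256 = (int)(saturation * 256.0f); r, g, b are the 0..255 channel values
int gray = (r * 77 + g * 151 + b * 28) >> 8;     // ~0.30 / 0.59 / 0.11 in 8.8 fixed point
r = (gray * (256 - sat256) + r * sat256) >> 8;
g = (gray * (256 - sat256) + g * sat256) >> 8;
b = (gray * (256 - sat256) + b * sat256) >> 8;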

(1.0f - saturation) is the same everywhere, therefore you can just assign it to a variable.
Instead of >> 16) / 255.0f and >> 8) / 255.0f you can convert them into single multiplications. Or, you can divide by 256 instead of 255 by folding the division into the shifts (>> 10 and >> 2 respectively):
R = ((pixelValue & 0xff0000) >> 10);
G = ((pixelValue & 0xff00) >> 2);

Several points to optimize in that code:
Favor integer computation: instead of transforming your RGB data from [0, 255] to [0, 1], do the inverse and transform all your contrast, brightness, etc. parameters to lie between 0 and 255.
The clipping operations can often be simplified with a clipping table, which removes the if/else statements:
R = clip[R'];
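A minimal sketch of such a clip table (the bounds just need to cover whatever intermediate range your integer math can produce, here -256..511 as an assumption):
unsigned char clipData[768];             // entries for values in [-256, 511]
unsigned char *clip = clipData + 256;    // so clip[v] is valid for v in [-256, 511]
for (int v = -256; v < 512; v++)
    clip[v] = (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));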
I notice a strange clipping section
// Clamp values
R = R > 255.0f ? 255.0f : R < 0.0f ? 0.0f : R;
G = G > 255.0f ? 255.0f : G < 0.0f ? 0.0f : G;
B = B > 255.0f ? 255.0f : B < 0.0f ? 0.0f : B;
Here it looks like you are still in the [0, 1] range, so it is useless!
At the end, review your formula, because it seems that exposure and brightness can be factorized to remove some operations.
Finally, that code is a good candidate for SIMD and MIMD, so look at whether SIMD intrinsics (MMX/SSE on x86, NEON on ARM) or OpenMP can solve your performance issue.
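For the OpenMP part, a minimal sketch (it assumes the per-pixel work has already been reduced to an integer table lookup; processPixel is a made-up helper name here):
// Each row is independent, so rows can be processed in parallel.
#pragma omp parallel for
for (int y = 0; y < (int)height; y++) {
    unsigned int *row = rgb + (size_t)y * width;
    for (unsigned int x = 0; x < width; x++)
        row[x] = processPixel(row[x]);   // hypothetical per-pixel table lookup
}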

Related

Cubic Interpolation with the official formula fails

I am trying to implement the cubic interpolation method using the following formula, with a = -0.5 as usual.
My linear interpolation and nearest-neighbor interpolation work great, but for some reason the cubic interpolation fails on white pixels, sometimes turning them turquoise and sometimes corrupting other colors.
For example, using rotation (note: please look carefully at the right image and you will notice the problems):
Another example with many more black pixels. It almost seems to work perfectly, but look at the dog's tongue (strong white pixels turn turquoise again).
You can see that my implementation of the linear interpolation is working great:
Since the actual rotation worked, I think I have a small mistake in the code that I did not notice, or maybe it's a numeric error or a double/float precision error.
It is important to note that I read the image normally and store the destination image as follows:
cv::Mat img = cv::imread("../dogfails.jpeg");
cv::Mat rotatedImageCubic(img.rows,img.cols,CV_8UC3);
Clarifications:
Inside my cubic interpolation function, srcPoint (newX and newY) is the "landing point" from the inverse transformation.
In my inverse transformations I am not using matrix multiplication with the pixels, right now I am just using the formulas for rotation. It might be important for the "numerical errors". For example:
rotatedX = x * cos(angle * toRadian) + y * sin(angle * toRadian);
rotatedY = x * (-sin(angle * toRadian)) + y * cos(angle * toRadian);
Here is my code for the cubic interpolation:
double cubicEquationSolver(double d, double a) {
    d = abs(d);
    if (0.0 <= d && d <= 1.0) {
        double score = (a + 2.0) * pow(d, 3.0) - ((a + 3.0) * pow(d, 2.0)) + 1.0;
        return score;
    }
    else if (1 < d && d <= 2) {
        double score = a * pow(d, 3.0) - 5.0 * a * pow(d, 2.0) + 8.0 * a * d - 4.0 * a;
        return score;
    }
    else
        return 0.0;
}
void Cubic_Interpolation_Helper(const cv::Mat& src, cv::Mat& dst, const cv::Point2d& srcPoint, cv::Point2i& dstPixel) {
    double newX = srcPoint.x;
    double newY = srcPoint.y;
    double dx = abs(newX - round(newX));
    double dy = abs(newY - round(newY));
    double sumCubicBValue = 0;
    double sumCubicGValue = 0;
    double sumCubicRValue = 0;
    double sumCubicGrayValue = 0;
    double uX = 0;
    double uY = 0;
    if (floor(newX) - 1 < 0 || floor(newX) + 2 > src.cols - 1 || floor(newY) < 0 || floor(newY) > src.rows - 1) {
        if (dst.channels() > 1)
            dst.at<cv::Vec3b>(dstPixel) = cv::Vec3b(0, 0, 0);
        else
            dst.at<uchar>(dstPixel) = 0;
    }
    else {
        for (int cNeighbor = -1; cNeighbor <= 2; cNeighbor++) {
            for (int rNeighbor = -1; rNeighbor <= 2; rNeighbor++) {
                uX = cubicEquationSolver(rNeighbor + dx, -0.5);
                uY = cubicEquationSolver(cNeighbor + dy, -0.5);
                if (src.channels() > 1) {
                    sumCubicBValue = sumCubicBValue + (double) src.at<cv::Vec3b>(
                        cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY)))[0] * uX * uY;
                    sumCubicGValue = sumCubicGValue + (double) src.at<cv::Vec3b>(
                        cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY)))[1] * uX * uY;
                    sumCubicRValue = sumCubicRValue + (double) src.at<cv::Vec3b>(
                        cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY)))[2] * uX * uY;
                } else {
                    sumCubicGrayValue = sumCubicGrayValue + (double) src.at<uchar>(
                        cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY))) * uX * uY;
                }
            }
        }
        if (dst.channels() > 1)
            dst.at<cv::Vec3b>(dstPixel) = cv::Vec3b((int) round(sumCubicBValue), (int) round(sumCubicGValue),
                                                    (int) round(sumCubicRValue));
        else
            dst.at<uchar>(dstPixel) = sumCubicGrayValue;
    }
}
I hope someone here will be able to help me. Thanks!

How to calculate the RGB values of a pixel from the luminance?

I want to compute the RGB values from the luminance.
The data that I know are:
the new luminance (the value that I want to apply)
the old luminance
the old RGB values.
We can compute the luminance from the RGB values like this:
uint8_t luminance = R * 0.21 + G * 0.71 + B * 0.07;
My code is:
// We create a function to set the luminance of a pixel
void jpegImage::setLuminance(uint8_t newLuminance, unsigned int x, unsigned int y) {
    // If the X or Y value is out of range, we throw an error
    if(x >= width) {
        throw std::runtime_error("Error : in jpegImage::setLuminance : The X value is out of range");
    }
    else if(y >= height) {
        throw std::runtime_error("Error : in jpegImage::setLuminance : The Y value is out of range");
    }
    // If the image is monochrome
    if(pixelSize == 1) {
        // We set the pixel value to the luminance
        pixels[y][x] = newLuminance;
    }
    // Else if the image is colored
    else if(pixelSize == 3) {
        // I don't know how to proceed
        // My image is stored in a std::vector<std::vector<uint8_t>> pixels;
        // This is a list that contains the lines of the image
        // Each line contains the RGB values of the successive pixels
        // For example, an image with 2 columns and 3 lines:
        // [[R, G, B, R, G, B], [R, G, B, R, G, B], [R, G, B, R, G, B]]
        // For example, the R value with x = 23, y = 12 is:
        // pixels[12][23 * pixelSize];
        // For example, the B value with x = 23, y = 12 is:
        // pixels[12][23 * pixelSize + 2];
        // (If the image is colored, the pixelSize will be 3 (R, G and B))
        // (If the image is monochrome, the pixelSize will be 1 (just the luminance value))
    }
}
How can I proceed?
Thanks!
You don't need the old luminance if you have the original RGB.
Referencing https://www.fourcc.org/fccyvrgb.php for YUV to RGB conversion.
Compute U and V from original RGB:
```
V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
```
Y is the new luminance, normalized to a value between 0 and 255.
Then just convert back to RGB:
B = 1.164 * (Y - 16) + 2.018 * (U - 128)
G = 1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128)
R = 1.164 * (Y - 16) + 1.596 * (V - 128)
Make sure you clamp the computed value of each equation to the range 0..255; some of these formulas can produce a YUV or RGB value less than 0 or higher than 255.
There are also multiple formulas for converting between YUV and RGB (different constants). I noticed the page listed above has a different computation for Y than the one you cited. They are all relatively close, with different precisions and adjustments. For just changing the brightness of a pixel, almost any formula will do.
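As a sketch of how this could look inside the question's pixelSize == 3 branch (it uses the pixels layout described in the question's comments; the clamp lambda is just for illustration):
double R = pixels[y][x * pixelSize];
double G = pixels[y][x * pixelSize + 1];
double B = pixels[y][x * pixelSize + 2];
double V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128;
double U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128;
double Y = newLuminance;                                 // the new luminance, already 0..255
double newB = 1.164 * (Y - 16) + 2.018 * (U - 128);
double newG = 1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128);
double newR = 1.164 * (Y - 16) + 1.596 * (V - 128);
auto clamp255 = [](double v) { return v < 0 ? 0.0 : (v > 255 ? 255.0 : v); };
pixels[y][x * pixelSize]     = (uint8_t)clamp255(newR);
pixels[y][x * pixelSize + 1] = (uint8_t)clamp255(newG);
pixels[y][x * pixelSize + 2] = (uint8_t)clamp255(newB);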
Updated
I originally deleted this answer after the OP suggested it wasn't working for him. I was too busy for the last few days to investigate, but I wrote some sample code to confirm my hypothesis. At the bottom of this answer is a snippet of GDI+-based code that increases the luminance of an image by a variable amount. Along with the code is an image that I tested this on, and two conversions: one at 130% brightness, another at 170% brightness.
Here's a sample conversion
Original Image
Updated Image (at 130% Y)
Updated Image (at 170% Y)
Source:
#define CLAMP(val) {val = (val > 255) ? 255 : ((val < 0) ? 0 : val);}
void Brighten(Gdiplus::BitmapData& dataIn, Gdiplus::BitmapData& dataOut, const double YMultiplier = 1.3)
{
    if (((dataIn.PixelFormat != PixelFormat24bppRGB) && (dataIn.PixelFormat != PixelFormat32bppARGB)) ||
        ((dataOut.PixelFormat != PixelFormat24bppRGB) && (dataOut.PixelFormat != PixelFormat32bppARGB)))
    {
        return;
    }
    if ((dataIn.Width != dataOut.Width) || (dataIn.Height != dataOut.Height))
    {
        // image sizes aren't the same
        return;
    }
    const size_t incrementIn = dataIn.PixelFormat == PixelFormat24bppRGB ? 3 : 4;
    const size_t incrementOut = dataOut.PixelFormat == PixelFormat24bppRGB ? 3 : 4;
    const size_t width = dataIn.Width;
    const size_t height = dataIn.Height;
    for (size_t y = 0; y < height; y++)
    {
        auto ptrRowIn = (BYTE*)(dataIn.Scan0) + (y * dataIn.Stride);
        auto ptrRowOut = (BYTE*)(dataOut.Scan0) + (y * dataOut.Stride);
        for (size_t x = 0; x < width; x++)
        {
            uint8_t B = ptrRowIn[0];
            uint8_t G = ptrRowIn[1];
            uint8_t R = ptrRowIn[2];
            uint8_t A = (incrementIn == 3) ? 0xFF : ptrRowIn[3];
            auto Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16;
            auto V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128;
            auto U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128;
            Y *= YMultiplier;
            auto newB = 1.164 * (Y - 16) + 2.018 * (U - 128);
            auto newG = 1.164 * (Y - 16) - 0.813 * (V - 128) - 0.391 * (U - 128);
            auto newR = 1.164 * (Y - 16) + 1.596 * (V - 128);
            CLAMP(newR);
            CLAMP(newG);
            CLAMP(newB);
            ptrRowOut[0] = newB;
            ptrRowOut[1] = newG;
            ptrRowOut[2] = newR;
            if (incrementOut == 4)
            {
                ptrRowOut[3] = A; // keep original alpha
            }
            ptrRowIn += incrementIn;
            ptrRowOut += incrementOut;
        }
    }
}

BGR -> YCbCr conversion not working correctly

I am trying to manually convert an image from RGB (BGR in OpenCV) to the YCbCr color space.
My image is a PNG color image, 800 pixels wide and 600 pixels high, with 3 channels and 16-bit depth.
Here's how I tried solving this.
cv::Mat convertToYCbCr(cv::Mat image) {
    // converts an RGB image to YCbCr
    // cv::Mat: B-G-R
    std::cout << "Converting image to YCbCr color space." << std::endl;
    int i, j;
    for (i = 0; i <= image.cols; i++) {
        for (j = 0; j <= image.rows; j++) {
            // R, G, B values
            auto R = image.at<cv::Vec3d>(j, i)[2];
            auto G = image.at<cv::Vec3d>(j, i)[1];
            auto B = image.at<cv::Vec3d>(j, i)[0];
            // Y'
            auto Y = image.at<cv::Vec3d>(j, i)[0] = 0.299 * R + 0.587 * G + 0.114 * B + 16;
            // Cb
            auto Cb = image.at<cv::Vec3d>(j, i)[1] = 128 + (-0.169 * R - 0.331 * G + 0.5 * B);
            // Cr
            auto Cr = image.at<cv::Vec3d>(j, i)[2] = 128 + (0.5 * R - 0.419 * G - 0.081 * B);
            std::cout << "At conversion: Y = " << Y << ", Cb = " << Cb << ", "
                      << Cr << std::endl;
        }
    }
    std::cout << "Converting finished." << std::endl;
    return image;
}
The image I receive looks like this:
What I am expecting is this (using the OpenCV method):
Do the vertical lines maybe hint at something? Is my loop wrong? Can I even just "replace" the RGB values with YCbCr values and expect the image to look like the example? typeid() returns the same value for both images, N2cv3MatE.
The primary reason for the incorrect results is the incorrect data type used to access the image. The correct type for accessing 16-bit unsigned pixels is cv::Vec3w (not cv::Vec3d).
The next issue is that the coefficients being used for the conversion are designed for analog signals (YPbPr). For digital images, we have to use coefficients designed for digital images (YCbCr). You can find more details in the Wikipedia article on YCbCr, in the section on ITU-R BT.601 conversion.
The piece of information missing from that article is how the coefficients change if the image has 16-bit unsigned or 32-bit floating-point depth. The answer is that we have to scale the coefficients according to the bit depth of the image.
For images with 16-bit unsigned depth, the scaling should be performed as follows:
auto Y = (R * 65.481f * scale) + (G * 128.553f * scale) + (B * 24.966f * scale) + (16.0f * offset);
auto Cb = (R * -37.797f * scale) + (G * -74.203f * scale) + (B * 112.0f * scale) + (128.0f * offset);
auto Cr = (R * 112.0f * scale) + (G * -93.786f * scale) + (B * -18.214f * scale) + (128.0f * offset);
where scale is equal to 257.0/65535.0 and offset is equal to 257.0.
This conversion technique was adopted from the MATLAB source code for the rgb2ycbcr function, which references the following book describing the scaling:
C.A. Poynton, "A Technical Introduction to Digital Video", John Wiley & Sons, Inc., 1996, Chapter 9, Page 175.
Now that the conversion is done, the third issue we face is visualizing the image the same way OpenCV does. When we perform color conversion with OpenCV, the output image is stored in the order YCrCb instead of the usual YCbCr, so to get the same image with our custom conversion logic we have to store the values in that order.
A sample conversion code may look like this:
if(image.type() == CV_16UC3)
{
    const float scale = 257.0f / 65535.0f;
    const float offset = 257.0f;
    for (int i = 0; i < image.cols; i++)
    {
        for (int j = 0; j < image.rows; j++)
        {
            auto R = image.at<cv::Vec3w>(j, i)[2];
            auto G = image.at<cv::Vec3w>(j, i)[1];
            auto B = image.at<cv::Vec3w>(j, i)[0];
            auto Y = (R * 65.481f * scale) + (G * 128.553f * scale) + (B * 24.966f * scale) + (16.0f * offset);
            auto Cb = (R * -37.797f * scale) + (G * -74.203f * scale) + (B * 112.0f * scale) + (128.0f * offset);
            auto Cr = (R * 112.0f * scale) + (G * -93.786f * scale) + (B * -18.214f * scale) + (128.0f * offset);
            image.at<cv::Vec3w>(j, i)[0] = (unsigned short)Y;
            image.at<cv::Vec3w>(j, i)[1] = (unsigned short)Cr;
            image.at<cv::Vec3w>(j, i)[2] = (unsigned short)Cb;
        }
    }
}
You should use cv::cvtColor
cvtColor(src, target_image, cv::COLOR_RGB2YCrCb);
Then just flip the second and third channels.
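A sketch of that flip using split/merge (cv::imread gives a BGR Mat, hence the BGR2YCrCb constant assumed here; cv::mixChannels would work too):
cv::Mat ycrcb;
cv::cvtColor(src, ycrcb, cv::COLOR_BGR2YCrCb);   // OpenCV produces Y, Cr, Cb order
std::vector<cv::Mat> ch;
cv::split(ycrcb, ch);
std::swap(ch[1], ch[2]);                         // swap Cr and Cb -> Y, Cb, Cr
cv::Mat ycbcr;
cv::merge(ch, ycbcr);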
Though you could be getting that error because you're not casting the resulting values to ints.

Computing the side planes of a 3D AABB

I have a 3D AABB defined by two points, a min and a max.
I'd like to define the 6 planes that make up the sides of the AABB, such that any point within the AABB has a positive signed distance.
My plane definition comprises a normal (x, y, z) and a constant D, corresponding to the plane equation Ax + By + Cz + D = 0.
struct myplane {
    double nx, ny, nz;
    double D;
};
Note: nx,ny, and nz are normalized.
The AABB struct is as follows:
struct myAABB {
    point3d min;
    point3d max;
};
I'm currently defining instances of the AABB sides like so:
myplane p0 = myplane{-1.0f, 0.0f, 0.0f, aabb.max.x};
myplane p1 = myplane{ 0.0f,-1.0f, 0.0f, aabb.max.y};
myplane p2 = myplane{ 0.0f, 0.0f,-1.0f, aabb.max.z};
myplane p3 = myplane{+1.0f, 0.0f, 0.0f, aabb.min.x};
myplane p4 = myplane{ 0.0f,+1.0f, 0.0f, aabb.min.y};
myplane p5 = myplane{ 0.0f, 0.0f,+1.0f, aabb.min.z};
where aabb in this case is: min(-1,-1,-1), max(1,1,1).
The problem is that points inside the AABB return a positive distance for the planes p0, p1 and p2, but not for the planes p3, p4 and p5: those return negative distances, which seems to indicate the points are on the other side.
For example, the origin (0,0,0) should return a positive distance of 1 for each of the planes, but it does not for planes p3, p4 and p5.
The signed-distance calculation being used is:
double distance(myplane& p, const point3d& v)
{
    // p.normal dot v + D
    return (p.nx * v.x) + (p.ny * v.y) + (p.nz * v.z) + p.D;
}
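For example, evaluating the origin against p3 as defined above (normal (+1, 0, 0), D = aabb.min.x = -1) gives 1*0 + 0*0 + 0*0 + (-1) = -1 rather than the expected +1.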
I think my equations are wrong in some way, but I can't seem to figure it out.
The signed distance from a point to a plane, according to Chapter 2.3 of the Mathematical Handbook (Korn & Korn), is
Delta = (Normal . dot . v + D) / (-Sign(D) * NormalLength)
but you don't account for the sign of D. Just modify the function:
dt = (p.nx * v.x) + (p.ny * v.y) + (p.nz * v.z) + p.D;
return (p.D < 0) ? dt : -dt;
Xm < X < XM
is equivalent to
1*X + 0*Y + 0*Z - Xm > 0  and  -1*X + 0*Y + 0*Z + XM > 0
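In other words, keeping the question's convention (D is the constant in Ax + By + Cz + D = 0), those inequalities correspond to negating the constant on the min-side planes. A sketch using the question's structs:
// Max-side planes: -X + max.x > 0, etc. (same as the question)
myplane p0 = myplane{-1.0, 0.0, 0.0,  aabb.max.x};
myplane p1 = myplane{ 0.0,-1.0, 0.0,  aabb.max.y};
myplane p2 = myplane{ 0.0, 0.0,-1.0,  aabb.max.z};
// Min-side planes: +X - min.x > 0, so D is -min, not +min
myplane p3 = myplane{+1.0, 0.0, 0.0, -aabb.min.x};
myplane p4 = myplane{ 0.0,+1.0, 0.0, -aabb.min.y};
myplane p5 = myplane{ 0.0, 0.0,+1.0, -aabb.min.z};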

How to understand this RayTracer code [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
So this ray-tracing code creates a 3D image, with blur, through raw code. How is that actually done without any modelling tools?
I am currently working to understand how ray tracers work and the different ways to implement them, so it was kind of cool to see such a small amount of code produce a pretty impressive 3D image.
#include <stdlib.h> // card > aek.ppm
#include <stdio.h>
#include <math.h>
#include <fstream>
typedef int i;
typedef float f;
struct v {
f x, y, z;
v operator+(v r) {
return v(x + r.x, y + r.y, z + r.z);
}
v operator*(f r) {
return v(x * r, y * r, z * r);
}
f operator%(v r) {
return x * r.x + y * r.y + z * r.z;
}
v() {}
v operator^(v r) {
return v(y * r.z - z * r.y, z * r.x - x * r.z, x * r.y - y * r.x);
}
v(f a, f b, f c) {x = a; y = b; z = c;}
v operator!() {
return*this * (1 / sqrt(*this % *this));
}
};
i G[] = {247570, 280596, 280600, 249748, 18578, 18577, 231184, 16, 16};
f R()
{
return(f)rand() / RAND_MAX;
}
i T(v o, v d, f&t, v&n)
{
t = 1e9; i m = 0;
f p = -o.z / d.z;
if(.01 < p)t = p, n = v(0, 0, 1), m = 1;
for(i k = 19; k--;)
for(i j = 9; j--;)if(G[j] & 1 << k) {
v p = o + v(-k, 0, -j - 4);
f b = p % d, c = p % p - 1, q = b * b - c;
if(q > 0) {
f s = -b - sqrt(q);
if(s < t && s > .01)
t = s, n = !(p + d * t), m = 2;
}
}
return m;
} v S(v o, v d)
{
f t;
v n;
i m = T(o, d, t, n);
if(!m)return v(.7, .6, 1) * pow(1 - d.z, 4);
v h = o + d * t, l = !(v(9 + R(), 9 + R(), 16) + h * -1), r = d + n * (n % d * -2);
f b = l % n; if(b < 0 || T(h, l, t, n))b = 0;
f p = pow(l % r * (b > 0), 99);
if(m & 1) {
h = h * .2;
return((i)(ceil(h.x) + ceil(h.y)) & 1 ? v(3, 1, 1) : v(3, 3, 3)) * (b * .2 + .1);
} return v(p, p, p) + S(h, r) * .5;
} i
main()
{
FILE * pFile;
pFile = fopen("d:\\myfile3.ppm", "w");
fprintf(pFile,"P6 512 512 255 ");
v g = !v(-6, -16, 0), a = !(v(0, 0, 1) ^ g) * .002, b = !(g ^ a) * .002, c = (a + b) * -256 + g;
for(i y = 512; y--;)
for(i x = 512; x--;) {
v p(13, 13, 13);
for(i r = 64; r--;) {
v t = a * (R() - .5) * 99 + b * (R() - .5) * 99;
p = S(v(17, 16, 8) + t, !(t * -1 + (a * (R() + x) + b * (y + R()) + c) * 16)) * 3.5 + p;
}
fprintf(pFile, "%c%c%c", (i)p.x, (i)p.y, (i)p.z);
}
}
My dear friend, that's Paul Heckbert's code, right?
You could at least mention it.
For people thinking this code is unreadable, here is why: this guy wrote code that could fit on a credit card; that was the goal :)
His website: http://www.cs.cmu.edu/~ph/
Edit: knowing the source of this code may help you understand it, even if that's not your main motivation...
If you are really interested in ray tracing, start with other sources.
Take a look at this website: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-1-writing-a-simple-raytracer/source-code/ (plus, it talks about your code).
This code is not really special. It is basically a ray tracer that was obfuscated into a form that makes it fit on a business card (see https://www.cs.cmu.edu/~ph/).
How is that actually done without any modelling tools?
You don't need tools to render anything. You could even create a complete game like WoW (or whatever is hip at the moment) without any modelling tool. Modelling tools just make your life easier w.r.t. certain kinds of scenes (read: very complex ones).
You could always hard-code the data, or hack it manually into some external file.
You could also use parametric generators; Perlin noise is one of the more popular examples thereof.
In a ray tracer, it happens to be very simple to start out without modelling tools, because it is very easy to calculate geometric intersections between the rendering primitive "ray" and any finite geometric primitive. E.g., intersecting a non-approximated, "perfect" sphere is just a few lines of code.
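For instance, a ray/sphere intersection in the style of the card above (a sketch reusing its v struct, where operator% is the dot product; o is the ray origin, d the normalized direction, c the sphere center, r its radius):
// Returns true and the nearest hit distance t if the ray hits the sphere.
bool raySphere(v o, v d, v c, float r, float &t) {
    v oc = o + c * -1;                   // vector from sphere center to ray origin
    f b = oc % d;
    f disc = b * b - (oc % oc) + r * r;  // discriminant of the quadratic in t
    if (disc < 0) return false;          // ray misses the sphere
    t = -b - sqrt(disc);                 // nearest intersection along the ray
    return t > .01;                      // same small epsilon the card uses
}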
tl;dr: Data is just data. How you create and crunch it is completely up to you.