If I have a texture, is it then possible to generate a normal-map for this texture, so it can be used for bump-mapping?
Or how are normal maps usually made?
Yes. Well, sort of. Normal maps can be accurately made from height maps, and you can generally also put a regular texture through the same process and get decent results. Keep in mind there are other methods of making a normal map, such as taking a high-resolution model, reducing it to a low-resolution version, and then ray casting to find the normal the low-resolution model needs at each point to simulate the high-resolution one.
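Here is a minimal sketch of that baking idea; the types and raycast_high_poly() are hypothetical stand-ins, not any particular tool's API:
struct vec3 { double x, y, z; };
struct hit { bool found; vec3 normal; };
// Hypothetical: intersect a ray with the high-resolution mesh and
// report the surface normal at the hit point.
hit raycast_high_poly(const vec3& origin, const vec3& direction);
// For each texel of the low-poly model's UV layout: take the point and
// normal on the low-poly surface that the texel maps to, cast a ray along
// that normal into the high-poly mesh, and store the normal it hits.
vec3 bake_texel(const vec3& lowPolyPoint, const vec3& lowPolyNormal)
{
    hit h = raycast_high_poly(lowPolyPoint, lowPolyNormal);
    return h.found ? h.normal : lowPolyNormal; // fall back if nothing is hit
}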
For height-map to normal-map conversion, you can use the Sobel operator. Run it in the x-direction to get the x-component of the normal, then in the y-direction to get the y-component. Compute z as 1.0 / strength, where strength is the emphasis or "deepness" of the normal map. Then take that x, y, and z, throw them into a vector, normalize it, and you have your normal at that point. Encode it into the pixel and you're done.
Here's some older, incomplete code that demonstrates this:
#include <cstdint> // for uint8_t
// pretend types, something like this
struct pixel
{
uint8_t red;
uint8_t green;
uint8_t blue;
};
struct vector3d; // a 3-vector with doubles
struct texture; // a 2d array of pixels
// determine intensity of pixel, from 0 - 1
double intensity(const pixel& pPixel)
{
const double r = static_cast<double>(pPixel.red);
const double g = static_cast<double>(pPixel.green);
const double b = static_cast<double>(pPixel.blue);
const double average = (r + g + b) / 3.0;
return average / 255.0;
}
// clamp an index into the valid range [0, pMax - 1]
int clamp(int pX, int pMax)
{
if (pX >= pMax)
{
return pMax - 1;
}
else if (pX < 0)
{
return 0;
}
else
{
return pX;
}
}
// transform a component from [-1, 1] to [0, 255]
uint8_t map_component(double pX)
{
return static_cast<uint8_t>((pX + 1.0) * (255.0 / 2.0));
}
texture normal_from_height(const texture& pTexture, double pStrength = 2.0)
{
// assume square texture, not necessarily true in real code
texture result(pTexture.size(), pTexture.size());
const int textureSize = static_cast<int>(pTexture.size());
// int (not size_t) so row - 1 and column - 1 can go negative for clamp
for (int row = 0; row < textureSize; ++row)
{
for (int column = 0; column < textureSize; ++column)
{
// surrounding pixels
const pixel topLeft = pTexture(clamp(row - 1, textureSize), clamp(column - 1, textureSize));
const pixel top = pTexture(clamp(row - 1, textureSize), clamp(column, textureSize));
const pixel topRight = pTexture(clamp(row - 1, textureSize), clamp(column + 1, textureSize));
const pixel right = pTexture(clamp(row, textureSize), clamp(column + 1, textureSize));
const pixel bottomRight = pTexture(clamp(row + 1, textureSize), clamp(column + 1, textureSize));
const pixel bottom = pTexture(clamp(row + 1, textureSize), clamp(column, textureSize));
const pixel bottomLeft = pTexture(clamp(row + 1, textureSize), clamp(column - 1, textureSize));
const pixel left = pTexture(clamp(row, textureSize), clamp(column - 1, textureSize));
// their intensities
const double tl = intensity(topLeft);
const double t = intensity(top);
const double tr = intensity(topRight);
const double r = intensity(right);
const double br = intensity(bottomRight);
const double b = intensity(bottom);
const double bl = intensity(bottomLeft);
const double l = intensity(left);
// sobel filter
const double dX = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
const double dY = (bl + 2.0 * b + br) - (tl + 2.0 * t + tr);
const double dZ = 1.0 / pStrength;
vector3d v(dX, dY, dZ);
v.normalize();
// convert to rgb
result(row, column) = pixel{ map_component(v.x), map_component(v.y), map_component(v.z) };
}
}
return result;
}
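Usage would look something like this; load_texture and save_texture are hypothetical stand-ins for whatever image I/O you have:
texture height = load_texture("height.png"); // hypothetical loader
texture normal = normal_from_height(height, 2.0); // higher strength = deeper bumps
save_texture(normal, "normal.png"); // hypothetical writer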
There are probably many ways to generate a normal map, but as others have said, you can do it from a height map, and 3D packages like XSI/3ds Max/Blender (any of them) can output one for you as an image.
You can then output an RGB image with the NVIDIA plugin for Photoshop, use an algorithm to convert it, or you might be able to output it directly from those 3D packages with third-party plugins.
Be aware that in some cases you might need to invert channels (R, G, or B) of the generated normal map.
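For example, inverting the green (Y) channel is one line per pixel. A minimal sketch, assuming a tightly packed 3-bytes-per-pixel RGB buffer:
#include <cstddef>
#include <cstdint>
// Flip the green (Y) channel of an RGB normal map, for engines that
// expect the opposite Y convention.
void flip_green(uint8_t* pixels, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount; ++i)
        pixels[i * 3 + 1] = 255 - pixels[i * 3 + 1]; // channel 1 = green
}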
Here are some resource links with examples and more complete explanations:
http://developer.nvidia.com/object/photoshop_dds_plugins.html
http://en.wikipedia.org/wiki/Normal_mapping
http://www.vrgeo.org/fileadmin/VRGeo/Bilder/VRGeo_Papers/jgt2002normalmaps.pdf
I don't think normal maps are generated from a texture; they are generated from a model.
Just as texturing allows you to define complex colour detail with minimal polys (as opposed to using millions of polys and just vertex colours to define the colour on your mesh),
a normal map allows you to define complex normal detail with minimal polys.
I believe normal maps are usually generated from a higher-res mesh and then used with a low-res mesh.
I'm sure 3D tools such as 3ds Max or Maya, as well as more specialised tools, will do this for you. Unlike textures, I don't think they are usually made by hand.
But they are generated from the mesh, not the texture.
I suggest starting with OpenCV because of its rich set of algorithms. Here's one I wrote that iteratively blurs the normal map and weights the blurred passes into the overall value, essentially creating more of a topological map.
#include <opencv2/opencv.hpp>
#include <cmath>
#define ROW_PTR(img, y) ((uchar*)((img).data + (img).step * (y)))
cv::Mat normalMap(const cv::Mat& bwTexture, double pStrength)
{
double scale = 1.0;
double delta = 127.0; // bias so negative gradients fit in the unsigned 8-bit output
cv::Mat sobelZ, sobelX, sobelY;
// note: cv::Sobel only accepts kernel sizes 1, 3, 5 or 7
cv::Sobel(bwTexture, sobelX, CV_8U, 1, 0, 7, scale, delta, cv::BORDER_DEFAULT);
cv::Sobel(bwTexture, sobelY, CV_8U, 0, 1, 7, scale, delta, cv::BORDER_DEFAULT);
sobelZ = cv::Mat(bwTexture.rows, bwTexture.cols, CV_8UC1);
for(int y=0; y<bwTexture.rows; y++) {
const uchar *sobelXPtr = ROW_PTR(sobelX, y);
const uchar *sobelYPtr = ROW_PTR(sobelY, y);
uchar *sobelZPtr = ROW_PTR(sobelZ, y);
for(int x=0; x<bwTexture.cols; x++) {
double Gx = double(sobelXPtr[x]) / 255.0;
double Gy = double(sobelYPtr[x]) / 255.0;
double Gz = pStrength * sqrt(Gx * Gx + Gy * Gy);
uchar value = cv::saturate_cast<uchar>(Gz * 255.0); // Gz can exceed 1.0, so saturate
sobelZPtr[x] = value;
}
}
std::vector<cv::Mat> planes;
planes.push_back(sobelX);
planes.push_back(sobelY);
planes.push_back(sobelZ);
cv::Mat normalMap;
cv::merge(planes, normalMap);
cv::Mat originalNormalMap = normalMap.clone();
cv::Mat normalMapBlurred;
for (int i=0; i<3; i++) {
cv::GaussianBlur(normalMap, normalMapBlurred, cv::Size(13, 13), 5, 5);
addWeighted(normalMap, 0.4, normalMapBlurred, 0.6, 0, normalMap);
}
addWeighted(originalNormalMap, 0.3, normalMapBlurred, 0.7, 0, normalMap);
return normalMap;
}
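A minimal driver for the function above might look like this (file names are placeholders):
#include <opencv2/opencv.hpp>
int main()
{
    // cv::Sobel wants a single-channel input, so load the height map as grayscale.
    cv::Mat height = cv::imread("height.png", cv::IMREAD_GRAYSCALE);
    if (height.empty())
        return 1;
    cv::Mat normals = normalMap(height, 1.0);
    cv::imwrite("normal.png", normals);
    return 0;
}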
Related
Question: I need to upgrade an old Embarcadero VCL graphics math application by introducing antialiased lines, so I wrote in C++ the algorithm described at https://en.wikipedia.org/wiki/Xiaolin_Wu%27s_line_algorithm.
How do I correctly write the function 'plot' to draw the pixel at (x, y) with a brightness 'c', especially on the Embarcadero VCL?
Solution:
This solution was made possible by the contribution of @Spektre (the use of a union to mix colors according to some brightness). pC is a canvas pointer and funcColor is the intended line color; both are properties of the Observer class:
//Antialiased line:
void Observer::aaLine(int x0, int y0, int x1, int y1)
{
union {
uint32_t dd;//The color value
uint8_t db[4];//To work on channels: {00.RR.GG.BB}
} c, c0;//Line color, and background color
//Color mixer, with calculations on each channel, because there is no
//Alpha channel with VCL:
auto plot = [&](int X, int Y, float brightness){
c.dd = funcColor;//Line color
c0.dd = pC->Pixels[X][Y];//Background color
//Find coefficients to simulate transparency, where there is not:
//Front color is augmented when background is decreased:
for(int i = 0; i < 3; ++i)
c.db[i] = int(c.db[i] * brightness + c0.db[i] * (1 - brightness));
//Output obtained by conversion:
pC->Pixels[X][Y] = static_cast<TColor>(c.dd);
};
//Wu's algorithm:
//Fractional part of x:
auto fpart = [](double x) { return x - floor(x); };
auto rfpart = [&](double x) { return 1 - fpart(x); };
bool steep = abs(y1 - y0) > abs(x1 - x0);//Means slope > 45 deg.
if(steep) {
std::swap(x0, y0);
std::swap(x1, y1);
}
if( x0 > x1 ) {
std::swap(x0, x1);
std::swap(y0, y1);
}
double dx = x1 - x0, dy = y1 - y0, gradient = (dx == 0. ? 1. : dy/dx) ;
//Handle first endpoint
double xend = x0,
yend = y0 + gradient * (xend - x0),
xgap = rfpart(x0 + 0.5),
xpxl1 = xend, // this will be used in the main loop
ypxl1 = floor(yend);
if( steep ) {
plot(ypxl1, xpxl1, rfpart(yend) * xgap);
plot(ypxl1+1, xpxl1, fpart(yend) * xgap);
}
else {
plot(xpxl1, ypxl1 , rfpart(yend) * xgap);
plot(xpxl1, ypxl1+1, fpart(yend) * xgap);
}
auto intery = yend + gradient; // first y-intersection for the main loop
//Handle second endpoint
xend = round(x1);
yend = y1 + gradient * (xend - x1);
xgap = fpart(x1 + 0.5);
auto xpxl2 = xend, //this will be used in the main loop
ypxl2 = floor(yend);
if( steep ){
plot(ypxl2 , xpxl2, rfpart(yend) * xgap);
plot(ypxl2+1, xpxl2, fpart(yend) * xgap);
//Main loop:
for(double x = xpxl1 + 1 ; x <= xpxl2 - 1 ; x += 1) {
plot(int(intery) , x, rfpart(intery));
plot(int(intery+1), x, fpart(intery));
intery += gradient;
}
}
else {
plot(xpxl2, ypxl2, rfpart(yend) * xgap);
plot(xpxl2, ypxl2+1, fpart(yend) * xgap);
//Main loop:
for(double x = xpxl1 + 1 ; x <= xpxl2 - 1 ; x += 1) {
plot(x, int(intery), rfpart(intery));
plot(x, int(intery+1), fpart(intery));
intery += gradient;
}
}
}//Observer::aaLine.
The source code above is updated and works for me as a solution.
The image below comes from my tests: the blue lines are NOT antialiased, and the red ones are the result of the solution above. It does what I want.
I think your problem lies in this:
auto plot = [&](double X, double Y, double brightness){
pC->Pixels[X][Y] = brightness; };
If I understand it correctly, pC is some target TCanvas ... this has 2 major problems:
pC->Pixels[X][Y] = brightness; will handle brightness as a color according to the selected mode (so copy, xor, ... or whatever) and not as a brightness.
I would use a form of alpha blending where you take the originally rendered color (or background) and the wanted color of the rendered line, and mix them with brightness as the parameter:
TColor c0 = pC->Pixels[X][Y], c1 = /* color of your line */;
// here mix colors: c = (c0 * (1.0 - brightness)) + (c1 * brightness)
// however you need to do this according to the selected pixel format of your graphic object, and color channel wise...
pC->Pixels[X][Y]=c;
Beware: VCL transparency does not use an alpha parameter, it's just opaque or not ... For more info about the mixing see this similar question:
Digital Differential Analyzer with Wu's Algorithm in OpenGL
especially pay attention to the:
union
{
DWORD dd;
BYTE db[4];
} c,c0;
as TColor is a 32-bit int anyway ...
The speed of pC->Pixels[X][Y] in VCL (or any GDI-based API) is pitiful at best.
In case you handle many pixels, you should consider using ScanLine[Y] from Graphics::TBitmap and rendering to a bitmap as a backbuffer. This usually improves speed by ~1000 to ~10000 times; a rough sketch follows the link. For more info see:
Graphics rendering in C++
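A rough sketch of that ScanLine backbuffer idea (VCL/C++Builder; the size is a placeholder and error handling is omitted):
// Render into a 32-bit bitmap through ScanLine, then blit once.
Graphics::TBitmap *bmp = new Graphics::TBitmap;
bmp->PixelFormat = pf32bit; // guarantee 4 bytes per pixel
bmp->Width = 800; // placeholder backbuffer size
bmp->Height = 600;
for (int y = 0; y < bmp->Height; ++y)
{
    DWORD *line = (DWORD*)bmp->ScanLine[y]; // direct pointer to one row
    for (int x = 0; x < bmp->Width; ++x)
        line[x] = 0x00FFFFFF; // raw 0x00RRGGBB writes, no per-pixel GDI cost
}
pC->Draw(0, 0, bmp); // single blit to the target canvas
delete bmp;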
I have programmed a simple dragon curve fractal. It seems to work for the most part, but there is an odd logical error that shifts the rotation of certain lines by one pixel. This wouldn't normally be an issue, but after a few generations, at the right size, the fractal begins to look wonky.
I am using OpenCV in C++ to generate it, but I'm pretty sure it's a logical error rather than a display error. I have printed the values to the console multiple times and seen for myself that there is a difference of one between values that are intended to be exactly the same - meaning a line may have a y of 200 at one end and 201 at the other.
Here is the full code:
#include<iostream>
#include<cmath>
#include<opencv2/opencv.hpp>
const int width=500;
const int height=500;
const double PI=std::atan(1)*4.0;
struct point{
double x;
double y;
point(double x_,double y_){
x=x_;
y=y_;
}};
cv::Mat img(width,height,CV_8UC3,cv::Scalar(255,255,255));
double deg_to_rad(double degrees){return degrees*PI/180;}
point rotate(int degree, int centx, int centy, int ll) {
double radians = deg_to_rad(degree);
return point(centx + (ll * std::cos(radians)), centy + (ll * std::sin(radians)));
}
void generate(point& r, std::vector<point>& verticies, int rotation = 90) {
int curRotation = 90;
bool start = true;
point center = r;
point rot(0, 0);
std::vector<point> verticiesc(verticies);
for (point i: verticiesc) {
double dx = center.x - i.x;
double dy = center.y - i.y;
//distance from centre
int ll = std::sqrt(dx * dx + dy * dy);
//angle from centre
curRotation = std::atan2(dy, dx) * 180 / PI;
//add 90 degrees of rotation
rot = rotate(curRotation + rotation, center.x, center.y, ll);
verticies.push_back(rot);
//endpoint, where the next centre will be
if (start) {
r = rot;
start = false;
}
}
}
void gen(int gens, int bwidth = 1) {
int ll = 7;
std::vector<point> verticies = {
point(width / 2, height / 2 - ll),
point(width / 2, height / 2)
};
point rot(width / 2, height / 2);
for (int i = 0; i < gens; i++) {
generate(rot, verticies);
}
//draw lines
for (int i = 0; i < verticies.size(); i += 2) {
cv::line(img, cv::Point(verticies[i].x, verticies[i].y), cv::Point(verticies[i + 1].x, verticies[i + 1].y), cv::Scalar(0, 0, 0), 1, 8);
}
}
int main() {
gen(10);
cv::imshow("", img);
cv::waitKey(0);
return 0;
}
First, you use int to store point coordinates - that's a bad idea, as you lose all accuracy of the point position. Use double or float.
Second, your method for drawing fractals is not very stable numerically. You'd be better off storing the original shape plus all the rotations/translations/scales that indicate where and how to draw scaled copies of it.
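For instance, a sketch of the idea (not the asker's code): keep everything in double, store each step as a rotation about a pivot, and round only when drawing:
#include <cmath>
struct pt { double x, y; };
// Rotate p about pivot by angle (radians), entirely in double precision.
pt rotate_about(pt p, pt pivot, double angle)
{
    const double c = std::cos(angle), s = std::sin(angle);
    const double dx = p.x - pivot.x, dy = p.y - pivot.y;
    return { pivot.x + dx * c - dy * s, pivot.y + dx * s + dy * c };
}
// Convert to int only at draw time, e.g. cv::Point(std::lround(p.x), std::lround(p.y)).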
Also, I believe this is a bug:
for (point i : verticies)
{
...
verticies.push_back(rot);
...
}
Changing the size of verticies while inside such a for-loop might cause a crash or UB.
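If you do need to grow a vector from its own elements, one safe pattern is to snapshot the size and index explicitly; a generic sketch:
#include <cstddef>
#include <vector>
// push_back can reallocate and invalidate the iterators a range-for uses,
// so iterate by index over the size recorded before the loop.
template <typename T, typename F>
void append_transformed(std::vector<T>& v, F transform)
{
    const std::size_t n = v.size(); // snapshot before growing
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(transform(v[i]));
}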
Turns out it was to do with floating-point precision. I changed
x=x_;
y=y_;
to
x=std::round(x_);
y=std::round(y_);
and it works.
I've made a path tracer using OpenCL and C++, following the basic structure in this tutorial: http://raytracey.blogspot.com/2016/11/opencl-path-tracing-tutorial-2-path.html. As far as I can tell, nothing is wrong with the path tracing algorithm itself, but I get strange stripe patterns in the image that don't match the regular noise of path tracing: striped image.
There are distinct vertical stripes and narrower horizontal ones that make the image look granular regardless of how many samples I take per pixel. Again, pixel by pixel, the path tracer seems to be working (the outlines of objects are correct even where they appear mid-stripe), as seen here: close-up.
The only difference between my code and the one in the tutorial I linked is that Sam Lapere appears to be using the C++ wrapper for OpenCL, and I've added a couple of features like movement. There are also a few differences in how I handle light bounces.
I'm new to OpenCL. What could be causing this? It seems like it doesn't have to do with my ray tracer itself, but somehow with the way I'm using OpenCL. I'm also using an SDL texture and renderer to show the image on the screen.
Here is the tracer code if it helps:
kernel:
__kernel void render_kernel
(__constant struct Sphere* spheres, const int width, const int height,
const int sphere_count, __global int* output, __global float3* pixel_buckets,
__global int* counter, __constant struct Ray* camera, __global bool* reset)
{
int gid = get_global_id(0);
//for movement
if (*reset){
pixel_buckets[gid] = (float3)(0,0,0);
counter[gid] = 0;
}
int xcoord = gid % width;
int ycoord = gid / width;
struct Ray camray = createCamRay(xcoord, ycoord, width, height, counter[gid], camera);
float3 final_color = trace(spheres, &camray, sphere_count, xcoord, ycoord);
counter[gid] ++;
//average colors
pixel_buckets[gid] += final_color;
output[gid] = colorInt(clampColor(pixel_buckets[gid] / counter[gid]));
}
trace:
float3 trace(__constant struct Sphere* spheres, struct Ray* camray, const int sphere_count,
unsigned int seed0, unsigned int seed1){
struct Ray ray = *camray;
struct Sphere sphere1;
sphere1.center = (float3)(0, 0, 3);
sphere1.radius = 0.7;
sphere1.color = (float3)(1,1,0);
const int bounce_count = 8;
float3 colors[20];
float3 emiss[20];
for (int bounce = 0; bounce < bounce_count; bounce ++){
int sphere_id = 0;
float hit_distance = intersectScene(spheres, &ray, &sphere_id, sphere_count);
struct Sphere hit_sphere = spheres[sphere_id];
float3 hit_point = ray.origin + (ray.direction * hit_distance);
float3 normal = normalize(hit_point - hit_sphere.center);
if (dot(normal, -ray.direction) < 0){
normal = -normal;
}
//random bounce angles; get_random takes pointers so it can advance the seeds
float rand_theta = get_random(&seed0, &seed1);
float theta = acos(sqrt(rand_theta));
float rand_phi = get_random(&seed0, &seed1);
float phi = 2 * PI * rand_phi;
//scales the tnb vectors
float x = sin(theta) * sin(phi);
float y = sin(theta) * cos(phi);
float n = cos(theta);
float3 hemx = normalize(cross(ray.direction, normal)) * x;
float3 hemy = normalize(cross(hemx, normal)) * y;
normal = normal * n;
float3 new_ray = normalize(hemx + hemy + normal);
ray.origin = hit_point + (normal * EPSILON);
ray.direction = new_ray;
colors[bounce] = hit_sphere.color;
emiss[bounce] = hit_sphere.emmissive;
}
colors[bounce_count] = (float3)(0,0,0);
emiss[bounce_count] = (float3)(0,0,0);
for (int i = bounce_count - 1; i >= 0; i--){
colors[i] = (colors[i] * emiss[i]) + (colors[i] * colors[i + 1]);
}
return colors[0];
}
random number generator:
float get_random(unsigned int *seed0, unsigned int *seed1) {
/* hash the seeds using bitwise AND operations and bitshifts */
*seed0 = 36969 * ((*seed0) & 65535) + ((*seed0) >> 16);
*seed1 = 18000 * ((*seed1) & 65535) + ((*seed1) >> 16);
unsigned int ires = ((*seed0) << 16) + (*seed1);
/* use union struct to convert int to float */
union {
float f;
unsigned int ui;
} res;
res.ui = (ires & 0x007fffff) | 0x40000000; /* keep 23 mantissa bits, force the exponent so res.f lands in [2, 4) */
return (res.f - 2.0f) / 2.0f; /* map [2, 4) down to [0, 1) */
}
thanks
I am trying to implement the snake algorithm for active contours using C++ and OpenCV 3. I am working with the version that uses gradient descent. As a base test, I am trying to draw the contour of a lip. This is the base image.
This is the evolution of the contour without external forces (alpha = 0.001, beta = 3, step size = 0.3).
When I add the external force, this is the result.
As the external force I have used just edge detection with the Sobel derivative.
This is the code I use for the points update.
array<Mat, 2> edges = edgeMatrices(croppedImage);
const float ALPHA = 0.001, BETA = 3, GAMMA = 0.3, // Gamma is step size.
a = GAMMA * ALPHA, b = GAMMA * BETA;
const uint16_t CYCLES = 1000;
const float p = b, q = -a - 4 * b, r = 1 + 2 * a + 6 * b;
Mat pMatrix = pentadiagonalMatrix(POINTS_NUM, p, q, r).inv();
for (uint16_t i = 0; i < CYCLES; ++i) {
// Extract the x and y derivatives for current points.
auto externalForces = external(edges, x, y);
x = pMatrix * (x + GAMMA * externalForces[0]);
y = pMatrix * (y + GAMMA * externalForces[1]);
// Draw the points.
if (i % 200 == 0 && i > 0)
drawPoints(croppedImage, x, y, { 0.2f * i, 0.2f * i, 0 });
}
This is the code for computing the derivatives.
array<Mat, 2> edgeMatrices(Mat &img) {
// Convert image.
Mat gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
// Apply Sobel filter.
Mat grad_x, grad_y, blurred_x, blurred_y;
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
int kernSize = 3;
Sobel(gray, grad_x, ddepth, 1, 0, kernSize, scale, delta, BORDER_DEFAULT);
Sobel(gray, grad_y, ddepth, 0, 1, kernSize, scale, delta, BORDER_DEFAULT);
GaussianBlur(grad_x, blurred_x, Size(5, 5), 30);
GaussianBlur(grad_y, blurred_y, Size(5, 5), 30);
return { blurred_x, blurred_y };
}
array<Mat, 2> external(array<Mat, 2> &edgeMat, Mat &x, Mat &y) {
array<Mat, 2> ext;
ext[0] = { Size{ 1, POINTS_NUM }, CV_32FC1 };
ext[1] = { Size{ 1, POINTS_NUM }, CV_32FC1 };
for (size_t i = 0; i < POINTS_NUM; ++i) {
ext[0].at<float>(0, i) = - edgeMat[0].at<short>(y.at<float>(0, i), x.at<float>(0, i));
ext[1].at<float>(0, i) = - edgeMat[1].at<short>(y.at<float>(0, i), x.at<float>(0, i));
}
return ext;
}
As you can see, the contour points converge in a very strange way, and not towards the edge of the lip (the result I would expect).
I am not able to tell whether it is an implementation error, a parameter-tuning problem, or just normal behaviour because I have misunderstood something about the algorithm.
I have some doubts about the derivative matrices; I think they should be regularized in some way, but I am not sure which way is right. Can someone help me?
The only implementations I have found are of the greedy method.
I've got an interesting problem. I'm using matrix multiplication to rotate and scale images for my game. It works great when I scale an image down by half or more, but if the image stays its original size, holes start to appear. I've attached some images of the problem below. My drawing code is below as well.
Before rotation
After rotation
Drawing code
void Graphics::drawImage(Graphics::Image i, float x, float y, float rot, float xScale, float yScale)
{
unsigned char r = 0, g = 0, b = 0;
Vector<float> pos = Vector<float>::create(0, 0);
i.setRotation(rot);
i.setXScale(xScale);
i.setYScale(yScale);
for (int j = 0; j < i.getWidth() * i.getHeight(); j++)
{
i.getPixel((j % i.getWidth()), (j / i.getWidth()), r, g, b);
SDL_SetRenderDrawColor(renderer, r, g, b, 255);
pos.elements[0] = (j % i.getWidth());
pos.elements[1] = (j / i.getWidth());
Vector<float> transPos = pos - Vector<float>::create(i.getCenterX(), i.getCenterY());
Matrix<float> trans = math::scale<float>(i.getXScale(), i.getYScale()) * math::rot<float>((double)i.getRot());
transPos = math::mult<float, float>(trans, transPos);
SDL_RenderDrawPoint(renderer, (int)x + transPos.elements[0] + (i.getCenterX() * i.getXScale()), (int)y + transPos.elements[1] + (i.getCenterY() * i.getYScale()));
}
}