I submitted this to gamedev, but they seem rather slow, so I hope I can find an answer here.
I've been experimenting with C++ AMP and OGRE in an attempt to make writing to and altering textures easier. As part of this I've been trying to draw a texture onto my "dynamic" texture, with strange results: a solid 3/4 of my image appears to be cropped off, and it's driving me mad because I cannot seem to find the fix.
Here's a video of the problem: http://www.youtube.com/watch?v=uFWxHtHtqAI
Here's all of the necessary code for context, though the kernel is really where the issue lies:
DynamicTexture.h
#define ValidTexCoord(x, y, width, height) ((x) >= 0 && (x) < (width) && (y) >= 0 && (y) < (height))
void TextureKernel(array<uint32, 2> &buffer, array_view<uint32, 2> texture, uint32 x, uint32 y, Real rot, Real scale, bool alpha)
{
    Real
        c = cos(-rot) / scale,
        s = sin(-rot) / scale;
    int32
        //e = int32(sqrt((texture.extent[1] * texture.extent[1]) + (texture.extent[0] * texture.extent[0])) * scale * 0.5F),
        dx = texture.extent[1] / 2,
        dy = texture.extent[0] / 2;
    parallel_for_each(buffer.extent, [=, &buffer](index<2> idx) restrict(amp)
    {
        int32
            tex_x = int32((Real(idx[1] - x) * c) - (Real(idx[0] - y) * s)) + dx,
            tex_y = int32((Real(idx[1] - x) * s) + (Real(idx[0] - y) * c)) + dy;
        if(ValidTexCoord(tex_x, tex_y, texture.extent[1], texture.extent[0]))
        {
            if(!alpha || (alpha && texture(tex_y, tex_x) != 0))
            {
                buffer(idx) = texture(tex_y, tex_x);
            }
        }
        else
        {
            buffer(idx) = 0x336699FF;
        }
    });
}
template<typename T, int32 Rank>
void SetKernel(array<T, Rank> &arr, T val)
{
    parallel_for_each(arr.extent, [&arr, val](index<Rank> idx) restrict(amp)
    {
        arr(idx) = val;
    });
}
class DynamicTexture
{
    static int32
        id;
    array<uint32, 2>
        buffer;
public:
    const int32
        width,
        height;
    TexturePtr
        textureptr;
    DynamicTexture(const int32 width, const int32 height, uint32 color = 0) :
        width(width),
        height(height),
        buffer(extent<2>(height, width))
    {
        SetKernel(buffer, color);
        textureptr = TextureManager::getSingleton().createManual("DynamicTexture" + StringConverter::toString(++id), ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TextureType::TEX_TYPE_2D, width, height, 0, PixelFormat::PF_A8R8G8B8);
    }
    ~DynamicTexture()
    {
    }
    void Texture(TexturePtr texture, uint32 x, uint32 y, Real rot = 0.F, Real scale = 1.F, bool alpha = false)
    {
        HardwarePixelBufferSharedPtr
            pixelbuffer = texture->getBuffer();
        TextureKernel(buffer, array_view<uint32, 2>(texture->getHeight(), texture->getWidth(), (uint32 *)pixelbuffer->lock(HardwareBuffer::HBL_READ_ONLY)), x, y, rot, scale, alpha);
        pixelbuffer->unlock();
    }
    void CopyToBuffer()
    {
        HardwarePixelBufferSharedPtr
            pixelbuffer = textureptr->getBuffer();
        copy(buffer, stdext::make_checked_array_iterator<uint32 *>((uint32 *)pixelbuffer->lock(HardwareBuffer::HBL_DISCARD), width * height));
        pixelbuffer->unlock();
    }
    void Reset(uint32 color)
    {
        SetKernel(buffer, color);
    }
};
int32 DynamicTexture::id = 0;
main.cpp
void initScene()
{
    dynamictexture = new DynamicTexture(window->getWidth(), window->getHeight());
    TextureManager::getSingleton().load("minotaur.jpg", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, Ogre::TextureType::TEX_TYPE_2D, 0);
}
bool frameStarted(const FrameEvent &evt)
{
    static Real
        ang = 0.F;
    ang += 0.05F;
    if(ang > Math::TWO_PI)
    {
        ang = 0.F;
    }
    dynamictexture->Reset(0);
    dynamictexture->Texture(TextureManager::getSingleton().getByName("minotaur.jpg"), dynamictexture->width / 2, dynamictexture->height / 2, ang, 4.F, true);
    dynamictexture->CopyToBuffer();
    return true;
}
As you can see, the dynamic texture is the size of the window (which in this case is 800x600) and the minotaur.jpg is 84x84. I'm simply placing it at half the width and height (center), rotating it by ang (radians), and scaling it to 4x.
In the kernel itself, I simply followed a 2D rotation matrix (where x and y are offset by the parameters 'x' and 'y'):
x' = x cosθ - y sinθ
y' = x sinθ + y cosθ
Also note that idx[1] represents the x value and idx[0] the y, because the array is declared with extent<2>(height, width) and stored row-major, i.e. value = buffer[(y * width) + x].
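To illustrate, here's a minimal standalone sketch of that indexing convention (not part of the project; the names and sizes are made up):
#include <amp.h>
using namespace concurrency;

// extent<2>(height, width) is indexed as idx[0] = row (y), idx[1] = column (x).
void layout_demo()
{
    const int width = 4, height = 3;
    array<unsigned int, 2> buffer(extent<2>(height, width));
    parallel_for_each(buffer.extent, [&buffer, width](index<2> idx) restrict(amp)
    {
        // Linear, row-major position of this element: (y * width) + x.
        buffer(idx) = (idx[0] * width) + idx[1];
    });
}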
Thanks for any and all help!
Regards,
Tannz0rz
I found the solution thanks to this guy: https://sites.google.com/site/ofauckland/examples/rotating-pixels
const Real
    HALF_PI = Math::HALF_PI;
const int32
    cx = texture.extent[1] / 2,
    cy = texture.extent[0] / 2;
parallel_for_each(buffer.extent, [=, &buffer](index<2> idx) restrict(amp)
{
    int32
        tex_x = idx[1] - x,
        tex_y = idx[0] - y;
    Real
        dist = sqrt(Real((tex_x * tex_x) + (tex_y * tex_y))) / scale,
        theta = atan2(Real(tex_y), Real(tex_x)) - angle - HALF_PI;
    tex_x = int32(dist * sin(theta)) + cx;
    tex_y = int32(dist * cos(theta)) + cy;
    if(ValidTexCoord(tex_x, tex_y, texture.extent[1], texture.extent[0]))
    {
        buffer(idx) = texture(tex_y, tex_x);
    }
});
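For reference, the same mapping can also be written without the sqrt/atan2 round trip by rotating the pixel offset directly. This is only a sketch derived from the snippet above (same captures, variable names and math functions assumed), and it should be equivalent up to rounding:
parallel_for_each(buffer.extent, [=, &buffer](index<2> idx) restrict(amp)
{
    // Offset of this destination pixel from the placement point.
    Real
        tx = Real(idx[1]) - Real(x),
        ty = Real(idx[0]) - Real(y);
    // Same as dist/theta above, via sin(a - PI/2) = -cos(a) and cos(a - PI/2) = sin(a).
    int32
        tex_x = cx - int32((tx * cos(angle) + ty * sin(angle)) / scale),
        tex_y = cy + int32((ty * cos(angle) - tx * sin(angle)) / scale);
    if(ValidTexCoord(tex_x, tex_y, texture.extent[1], texture.extent[0]))
    {
        buffer(idx) = texture(tex_y, tex_x);
    }
});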
Related
I've made a path tracer using OpenCL and C++, following the basic structure in this tutorial: http://raytracey.blogspot.com/2016/11/opencl-path-tracing-tutorial-2-path.html. As far as I can tell, nothing is wrong with the path tracing algorithm itself, but I get strange stripe patterns in the image that don't match the regular noise of path tracing: striped image
There are distinct vertical stripes and narrower horizontal ones that make the image look granular regardless of how many samples I take per pixel. Again, pixel by pixel, the path tracer seems to be working (the outlines of objects are correct even where they appear mid-stripe), as seen here: close-up.
The only difference between my code and the one in the tutorial I link to is that Sam Lapere appears to be using the C++ wrapper for OpenCL, and I've added a couple of features like movement. There are also a few differences in how I'm handling light bounces.
I'm new to OpenCL. What could be causing this? It seems like it doesn't have to do with my ray tracer itself, but rather with the way I'm implementing OpenCL. I'm also using an SDL texture and renderer to show the image on the screen.
here is the tracer code if it helps:
kernel:
__kernel void render_kernel(__constant struct Sphere* spheres, const int width, const int height,
                            const int sphere_count, __global int* output, __global float3* pixel_buckets,
                            __global int* counter, __constant struct Ray* camera, __global bool* reset)
{
    int gid = get_global_id(0);
    //for movement
    if (*reset){
        pixel_buckets[gid] = (float3)(0, 0, 0);
        counter[gid] = 0;
    }
    int xcoord = gid % width;
    int ycoord = gid / width;
    struct Ray camray = createCamRay(xcoord, ycoord, width, height, counter[gid], camera);
    float3 final_color = trace(spheres, &camray, sphere_count, xcoord, ycoord);
    counter[gid]++;
    //average colors
    pixel_buckets[gid] += final_color;
    output[gid] = colorInt(clampColor(pixel_buckets[gid] / counter[gid]));
}
trace:
float3 trace(__constant struct Sphere* spheres, struct Ray* camray, const int sphere_count,
             unsigned int seed0, unsigned int seed1)
{
    struct Ray ray = *camray;
    struct Sphere sphere1;
    sphere1.center = (float3)(0, 0, 3);
    sphere1.radius = 0.7;
    sphere1.color = (float3)(1, 1, 0);
    const int bounce_count = 8;
    float3 colors[20];
    float3 emiss[20];
    for (int bounce = 0; bounce < bounce_count; bounce++){
        int sphere_id = 0;
        float hit_distance = intersectScene(spheres, &ray, &sphere_id, sphere_count);
        struct Sphere hit_sphere = spheres[sphere_id];
        float3 hit_point = ray.origin + (ray.direction * hit_distance);
        float3 normal = normalize(hit_point - hit_sphere.center);
        if (dot(normal, -ray.direction) < 0){
            normal = -normal;
        }
        //random bounce angles
        float rand_theta = get_random(seed0, seed1);
        float theta = acos(sqrt(rand_theta));
        float rand_phi = get_random(seed0, seed1);
        float phi = 2 * PI * rand_phi;
        //scales the tnb vectors
        float x = sin(theta) * sin(phi);
        float y = sin(theta) * cos(phi);
        float n = cos(theta);
        float3 hemx = normalize(cross(ray.direction, normal)) * x;
        float3 hemy = normalize(cross(hemx, normal)) * y;
        normal = normal * n;
        float3 new_ray = normalize(hemx + hemy + normal);
        ray.origin = hit_point + (normal * EPSILON);
        ray.direction = new_ray;
        colors[bounce] = hit_sphere.color;
        emiss[bounce] = hit_sphere.emmissive;
    }
    colors[bounce_count] = (float3)(0, 0, 0);
    emiss[bounce_count] = (float3)(0, 0, 0);
    for (int i = bounce_count - 1; i >= 0; i--){
        colors[i] = (colors[i] * emiss[i]) + (colors[i] * colors[i + 1]);
    }
    return colors[0];
}
random number generator:
float get_random(unsigned int *seed0, unsigned int *seed1) {
    /* hash the seeds using bitwise AND operations and bitshifts */
    *seed0 = 36969 * ((*seed0) & 65535) + ((*seed0) >> 16);
    *seed1 = 18000 * ((*seed1) & 65535) + ((*seed1) >> 16);
    unsigned int ires = ((*seed0) << 16) + (*seed1);
    /* use union struct to convert int to float */
    union {
        float f;
        unsigned int ui;
    } res;
    res.ui = (ires & 0x007fffff) | 0x40000000; /* bitwise AND, bitwise OR */
    return (res.f - 2.0f) / 2.0f;
}
thanks
I am trying to compute real-world XYZ coordinates using a Kinect v2 camera (in Linux), but my computation gives me wrong results.
Here is the code:
cv::Point3f xyzWorld={0.0f};
xyzWorld.z = pointDepth;
xyzWorld.x = (float) ((float)x -(depthcx)) * xyzWorld.z / depthfx;
xyzWorld.y = (float) ((float)y - (depthcy)) * xyzWorld.z / depthfy;
xyzWorld.z = pointDepth;
return xyzWorld;
I think the problem is due to the fx, fy, cx and cy values of the depth camera.
Can someone help me?
I am using freenect2.
Why not just use the OpenNI implementation?
OniStatus VideoStream::convertDepthToWorldCoordinates(float depthX, float depthY, float depthZ, float* pWorldX, float* pWorldY, float* pWorldZ)
{
    if (m_pSensorInfo->sensorType != ONI_SENSOR_DEPTH)
    {
        m_errorLogger.Append("convertDepthToWorldCoordinates: Stream is not from DEPTH\n");
        return ONI_STATUS_NOT_SUPPORTED;
    }
    float normalizedX = depthX / m_worldConvertCache.resolutionX - .5f;
    float normalizedY = .5f - depthY / m_worldConvertCache.resolutionY;
    OniVideoMode videoMode;
    int size = sizeof(videoMode);
    getProperty(ONI_STREAM_PROPERTY_VIDEO_MODE, &videoMode, &size);
    float const convertToMillimeters = (videoMode.pixelFormat == ONI_PIXEL_FORMAT_DEPTH_100_UM) ? 10.f : 1.f;
    *pWorldX = (normalizedX * depthZ * m_worldConvertCache.xzFactor) / convertToMillimeters;
    *pWorldY = (normalizedY * depthZ * m_worldConvertCache.yzFactor) / convertToMillimeters;
    *pWorldZ = depthZ / convertToMillimeters;
    return ONI_STATUS_OK;
}
and
OniStatus VideoStream::convertWorldToDepthCoordinates(float worldX, float worldY, float worldZ, float* pDepthX, float* pDepthY, float* pDepthZ)
{
    if (m_pSensorInfo->sensorType != ONI_SENSOR_DEPTH)
    {
        m_errorLogger.Append("convertWorldToDepthCoordinates: Stream is not from DEPTH\n");
        return ONI_STATUS_NOT_SUPPORTED;
    }
    *pDepthX = m_worldConvertCache.coeffX * worldX / worldZ + m_worldConvertCache.halfResX;
    *pDepthY = m_worldConvertCache.halfResY - m_worldConvertCache.coeffY * worldY / worldZ;
    *pDepthZ = worldZ;
    return ONI_STATUS_OK;
}
and the world conversion cache :
void VideoStream::refreshWorldConversionCache()
{
    if (m_pSensorInfo->sensorType != ONI_SENSOR_DEPTH)
    {
        return;
    }
    OniVideoMode videoMode;
    int size = sizeof(videoMode);
    getProperty(ONI_STREAM_PROPERTY_VIDEO_MODE, &videoMode, &size);
    size = sizeof(float);
    float horizontalFov;
    float verticalFov;
    getProperty(ONI_STREAM_PROPERTY_HORIZONTAL_FOV, &horizontalFov, &size);
    getProperty(ONI_STREAM_PROPERTY_VERTICAL_FOV, &verticalFov, &size);
    m_worldConvertCache.xzFactor = tan(horizontalFov / 2) * 2;
    m_worldConvertCache.yzFactor = tan(verticalFov / 2) * 2;
    m_worldConvertCache.resolutionX = videoMode.resolutionX;
    m_worldConvertCache.resolutionY = videoMode.resolutionY;
    m_worldConvertCache.halfResX = m_worldConvertCache.resolutionX / 2;
    m_worldConvertCache.halfResY = m_worldConvertCache.resolutionY / 2;
    m_worldConvertCache.coeffX = m_worldConvertCache.resolutionX / m_worldConvertCache.xzFactor;
    m_worldConvertCache.coeffY = m_worldConvertCache.resolutionY / m_worldConvertCache.yzFactor;
}
struct WorldConversionCache
{
    float xzFactor;
    float yzFactor;
    float coeffX;
    float coeffY;
    int resolutionX;
    int resolutionY;
    int halfResX;
    int halfResY;
} m_worldConvertCache;
All taken from the OpenNI GitHub repository.
You can get the horizontal and vertical FOV directly from the description of each frame.
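If you only need the math and not the OpenNI stream plumbing, here is a minimal standalone sketch of the same conversion (the resolution and FOV arguments are whatever your depth frames report; cv::Point3f is used only to match the question). With libfreenect2 you can also let its Registration class (getPointXYZ) do this for you from the device's IR camera intrinsics.
#include <cmath>
#include <opencv2/core.hpp>

// OpenNI-style depth-to-world conversion, written standalone.
// horizontalFov / verticalFov are in radians; depthZ is assumed to already be in millimeters.
cv::Point3f depthToWorld(float depthX, float depthY, float depthZ,
                         int resolutionX, int resolutionY,
                         float horizontalFov, float verticalFov)
{
    const float xzFactor = std::tan(horizontalFov / 2.0f) * 2.0f;
    const float yzFactor = std::tan(verticalFov / 2.0f) * 2.0f;

    const float normalizedX = depthX / resolutionX - 0.5f;
    const float normalizedY = 0.5f - depthY / resolutionY;

    return cv::Point3f(normalizedX * depthZ * xzFactor,
                       normalizedY * depthZ * yzFactor,
                       depthZ);
}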
I am using FreeType2 for font rendering. My problem is with the font size. I have the origin [0,0] in the top-left corner and the font size set to the full height of the screen. My rendered result can be seen in the picture.
Why is the font not filling the whole height of my window?
My render code:
int devW, devH;
device->GetViewport(&devW, &devH);
float sx = 2.0f / static_cast<float>(devW);
float sy = 2.0f / static_cast<float>(devH);
x = MyMath::MyMathUtils::MapRange(0, 1, -1, 1, x);
y = MyMath::MyMathUtils::MapRange(0, 1, -1, 1, y);
MyStringWide wText = MyStringWide(text);
this->fontQuad->PrepareForRender();
for (int i = 0; i < wText.GetLength(); i++)
{
    int znak = wText[i];
    unsigned long c = FT_Get_Char_Index(this->fontFace, znak);
    FT_Error error = FT_Load_Glyph(this->fontFace, c, FT_LOAD_RENDER);
    FT_GlyphSlot glyph = this->fontFace->glyph;
    MyStringAnsi textureName = "Font_Renderer_Texture";
    textureName += "_";
    textureName += znak;
    if (GetTexturePoolInstance()->ExistTexture(textureName) == false)
    {
        GetTexturePoolInstance()->AddTexture2D(textureName,
            glyph->bitmap.buffer, glyph->bitmap.width * glyph->bitmap.rows,
            MyGraphics::A8,
            glyph->bitmap.width, glyph->bitmap.rows,
            false,
            true);
    }
    float x2 = x + glyph->bitmap_left * sx;
    float y2 = -y - glyph->bitmap_top * sy;
    float w = glyph->bitmap.width * sx;
    float h = glyph->bitmap.rows * sy;
    this->fontQuad->GetEffect()->SetVector4("cornersData", MyMath::Vector4(x2, y2, w, h));
    this->fontQuad->GetEffect()->SetVector4("fontColor", fontColor);
    this->fontQuad->GetEffect()->SetVector4("texCoordData", MyMath::Vector4(0, 0, 1, 1));
    this->fontQuad->GetEffect()->SetTexture("fontTexture", textureName);
    this->fontQuad->RenderEffect("classic", 0, this->fontQuad->GetNumVertices(), 0, this->fontQuad->GetNumPrimitives());
    x += (glyph->advance.x >> 6) * sx;
    y += (glyph->advance.y >> 6) * sy;
}
You need to test with more complete text to understand the problem. A glyph's vertical extent is composed of the internal leading, the ascent and the descent. What you want for your computations seems to be the ascent. Also, ask yourself whether you want to support text like "Á" or a lowercase "p" (which dips below the baseline).
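As a rough illustration, here is a minimal sketch (my own, not from the question's engine) of sizing a face so that the ascender-to-descender span, rather than the nominal em size, matches a target pixel height; the rescaling heuristic is an assumption, not the only way to do it:
#include <ft2build.h>
#include FT_FREETYPE_H

// Request a pixel size such that ascender - descender roughly fills targetPixelHeight.
void setPixelSizeForFullHeight(FT_Face face, int targetPixelHeight)
{
    // First pass: request the target height as the nominal size.
    FT_Set_Pixel_Sizes(face, 0, targetPixelHeight);

    // size metrics are in 26.6 fixed point (1/64 pixel); descender is negative.
    long span = (face->size->metrics.ascender - face->size->metrics.descender) >> 6;
    if (span > 0 && span != targetPixelHeight)
    {
        // Rescale the request so ascent + |descent| matches the target.
        long corrected = (long)targetPixelHeight * targetPixelHeight / span;
        FT_Set_Pixel_Sizes(face, 0, (FT_UInt)corrected);
    }
}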
I am trying to implement a bilinear interpolation function, but for some reason I am getting bad output. I can't seem to figure out what's wrong; any help getting on the right track will be appreciated.
double lerp(double c1, double c2, double v1, double v2, double x)
{
    if (v1 == v2) return c1;
    double inc = ((c2 - c1) / (v2 - v1)) * (x - v1);
    double val = c1 + inc;
    return val;
}
void bilinearInterpolate(int width, int height)
{
    // if the current size is the same, do nothing
    if(width == GetWidth() && height == GetHeight())
        return;
    // Create a new image
    std::unique_ptr<Image2D> image(new Image2D(width, height));
    // x and y ratios
    double rx = (double)(GetWidth()) / (double)(image->GetWidth());   // oldWidth / newWidth
    double ry = (double)(GetHeight()) / (double)(image->GetHeight()); // oldHeight / newHeight
    // loop through destination image
    for(int y = 0; y < height; ++y)
    {
        for(int x = 0; x < width; ++x)
        {
            double sx = x * rx;
            double sy = y * ry;
            uint xl = std::floor(sx);
            uint xr = std::floor(sx + 1);
            uint yt = std::floor(sy);
            uint yb = std::floor(sy + 1);
            for (uint d = 0; d < image->GetDepth(); ++d)
            {
                uchar tl = GetData(xl, yt, d);
                uchar tr = GetData(xr, yt, d);
                uchar bl = GetData(xl, yb, d);
                uchar br = GetData(xr, yb, d);
                double t = lerp(tl, tr, xl, xr, sx);
                double b = lerp(bl, br, xl, xr, sx);
                double m = lerp(t, b, yt, yb, sy);
                uchar val = std::floor(m + 0.5);
                image->SetData(x, y, d, val);
            }
        }
    }
    // Cleanup
    mWidth = width; mHeight = height;
    std::swap(image->mData, mData);
}
Input Image (4 pixels wide and high)
My Output
Expected Output (Photoshop's Bilinear Interpolation)
Photoshop's algorithm assumes that each source pixel's color is at the center of the pixel, while your algorithm assumes that it is at the pixel's top-left corner. This causes your results to be shifted half a pixel up and to the left compared to Photoshop.
Another way to look at it is that your algorithm maps the x coordinate range (0, srcWidth) to (0, dstWidth), while Photoshop maps (-0.5, srcWidth-0.5) to (-0.5, dstWidth-0.5), and the same for the y coordinate.
Instead of:
double sx = x * rx;
double sy = y * ry;
You can use:
double sx = (x + 0.5) * rx - 0.5;
double sy = (y + 0.5) * ry - 0.5;
to get similar results. Note that this can give you a negative value for sx and sy.
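Since sx and sy can now go slightly negative (and slightly past the last source pixel on the other side), one simple option, sketched here using the names from the question's code and assuming GetWidth()/GetHeight() return int, is to clamp the sample coordinates before picking the four neighbours:
// Sketch: clamp the half-pixel-shifted source coordinates into the valid range.
double sx = (x + 0.5) * rx - 0.5;
double sy = (y + 0.5) * ry - 0.5;
sx = std::max(0.0, std::min(sx, (double)GetWidth() - 1.0));
sy = std::max(0.0, std::min(sy, (double)GetHeight() - 1.0));

int xl = (int)std::floor(sx);
int xr = std::min(xl + 1, GetWidth() - 1);
int yt = (int)std::floor(sy);
int yb = std::min(yt + 1, GetHeight() - 1);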
If I have a texture, is it then possible to generate a normal-map for this texture, so it can be used for bump-mapping?
Or how are normal maps usually made?
Yes. Well, sort of. Normal maps can be accurately made from height-maps. Generally, you can also put a regular texture through and get decent results as well. Keep in mind there are other methods of making a normal map, such as taking a high-resolution model, making it low resolution, then doing ray casting to see what the normal should be for the low-resolution model to simulate the higher one.
For height-map to normal-map, you can use the Sobel Operator. This operator can be run in the x-direction, telling you the x-component of the normal, and then the y-direction, telling you the y-component. You can calculate z with 1.0 / strength where strength is the emphasis or "deepness" of the normal map. Then, take that x, y, and z, throw them into a vector, normalize it, and you have your normal at that point. Encode it into the pixel and you're done.
Here's some older, incomplete code that demonstrates this:
// pretend types, something like this
struct pixel
{
    uint8_t red;
    uint8_t green;
    uint8_t blue;
};

struct vector3d; // a 3-vector with doubles
struct texture;  // a 2d array of pixels

// determine intensity of pixel, from 0 - 1
const double intensity(const pixel& pPixel)
{
    const double r = static_cast<double>(pPixel.red);
    const double g = static_cast<double>(pPixel.green);
    const double b = static_cast<double>(pPixel.blue);
    const double average = (r + g + b) / 3.0;
    return average / 255.0;
}

const int clamp(int pX, int pMax)
{
    if (pX > pMax)
    {
        return pMax;
    }
    else if (pX < 0)
    {
        return 0;
    }
    else
    {
        return pX;
    }
}

// transform -1 - 1 to 0 - 255
const uint8_t map_component(double pX)
{
    return (pX + 1.0) * (255.0 / 2.0);
}
texture normal_from_height(const texture& pTexture, double pStrength = 2.0)
{
    // assume square texture, not necessarily true in real code
    texture result(pTexture.size(), pTexture.size());
    const int textureSize = static_cast<int>(pTexture.size());
    for (int row = 0; row < textureSize; ++row)
    {
        for (int column = 0; column < textureSize; ++column)
        {
            // surrounding pixels (clamped to the last valid index at the edges)
            const pixel topLeft = pTexture(clamp(row - 1, textureSize - 1), clamp(column - 1, textureSize - 1));
            const pixel top = pTexture(clamp(row - 1, textureSize - 1), clamp(column, textureSize - 1));
            const pixel topRight = pTexture(clamp(row - 1, textureSize - 1), clamp(column + 1, textureSize - 1));
            const pixel right = pTexture(clamp(row, textureSize - 1), clamp(column + 1, textureSize - 1));
            const pixel bottomRight = pTexture(clamp(row + 1, textureSize - 1), clamp(column + 1, textureSize - 1));
            const pixel bottom = pTexture(clamp(row + 1, textureSize - 1), clamp(column, textureSize - 1));
            const pixel bottomLeft = pTexture(clamp(row + 1, textureSize - 1), clamp(column - 1, textureSize - 1));
            const pixel left = pTexture(clamp(row, textureSize - 1), clamp(column - 1, textureSize - 1));

            // their intensities
            const double tl = intensity(topLeft);
            const double t = intensity(top);
            const double tr = intensity(topRight);
            const double r = intensity(right);
            const double br = intensity(bottomRight);
            const double b = intensity(bottom);
            const double bl = intensity(bottomLeft);
            const double l = intensity(left);

            // sobel filter
            const double dX = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
            const double dY = (bl + 2.0 * b + br) - (tl + 2.0 * t + tr);
            const double dZ = 1.0 / pStrength;

            math::vector3d v(dX, dY, dZ);
            v.normalize();

            // convert to rgb
            result(row, column) = pixel{map_component(v.x), map_component(v.y), map_component(v.z)};
        }
    }
    return result;
}
There are probably many ways to generate a normal map, but as others said, you can do it from a height map, and 3D packages like XSI, 3ds Max or Blender can output one for you as an image.
You can then output an RGB image with the NVIDIA plugin for Photoshop, use an algorithm to convert it, or you might be able to output it directly from those 3D packages with third-party plugins.
Be aware that in some cases you might need to invert channels (R, G or B) of the generated normal map.
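For example, flipping the green channel (the usual OpenGL-versus-DirectX convention mismatch) is just a per-pixel inversion; a tiny sketch for an 8-bit interleaved image, with the layout parameters as assumptions:
// Invert one channel of an 8-bit interleaved image in place (e.g. channelIndex = 1 for green).
void invertChannel(unsigned char* pixels, size_t pixelCount, size_t channels, size_t channelIndex)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        unsigned char& c = pixels[(i * channels) + channelIndex];
        c = (unsigned char)(255 - c);
    }
}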
Here are some resource links with examples and a more complete explanation:
http://developer.nvidia.com/object/photoshop_dds_plugins.html
http://en.wikipedia.org/wiki/Normal_mapping
http://www.vrgeo.org/fileadmin/VRGeo/Bilder/VRGeo_Papers/jgt2002normalmaps.pdf
I don't think normal maps are generated from a texture; they are generated from a model.
Just as texturing allows you to define complex colour detail with minimal polys (as opposed to using millions of polys and just vertex colours to define the colour on your mesh), a normal map allows you to define complex normal detail with minimal polys.
I believe normal maps are usually generated from a higher-res mesh and then used with a low-res mesh.
I'm sure 3D tools such as 3ds Max or Maya, as well as more specific tools, will do this for you. Unlike textures, I don't think they are usually done by hand.
But they are generated from the mesh, not the texture.
I suggest starting with OpenCV, due to its richness in algorithms. Here's one I wrote that iteratively blurs the normal map and blends the blurred passes back into the overall value, essentially creating more of a topological map.
#define ROW_PTR(img, y) ((uchar*)((img).data + (img).step * (y)))

cv::Mat normalMap(const cv::Mat& bwTexture, double pStrength)
{
    // assume square texture, not necessarily true in real code
    int scale = 1.0;
    int delta = 127;

    cv::Mat sobelZ, sobelX, sobelY;
    cv::Sobel(bwTexture, sobelX, CV_8U, 1, 0, 13, scale, delta, cv::BORDER_DEFAULT);
    cv::Sobel(bwTexture, sobelY, CV_8U, 0, 1, 13, scale, delta, cv::BORDER_DEFAULT);
    sobelZ = cv::Mat(bwTexture.rows, bwTexture.cols, CV_8UC1);

    for(int y = 0; y < bwTexture.rows; y++) {
        const uchar *sobelXPtr = ROW_PTR(sobelX, y);
        const uchar *sobelYPtr = ROW_PTR(sobelY, y);
        uchar *sobelZPtr = ROW_PTR(sobelZ, y);
        for(int x = 0; x < bwTexture.cols; x++) {
            double Gx = double(sobelXPtr[x]) / 255.0;
            double Gy = double(sobelYPtr[x]) / 255.0;
            double Gz = pStrength * sqrt(Gx * Gx + Gy * Gy);
            uchar value = uchar(Gz * 255.0);
            sobelZPtr[x] = value;
        }
    }

    std::vector<cv::Mat> planes;
    planes.push_back(sobelX);
    planes.push_back(sobelY);
    planes.push_back(sobelZ);

    cv::Mat normalMap;
    cv::merge(planes, normalMap);

    cv::Mat originalNormalMap = normalMap.clone();
    cv::Mat normalMapBlurred;
    for (int i = 0; i < 3; i++) {
        cv::GaussianBlur(normalMap, normalMapBlurred, cv::Size(13, 13), 5, 5);
        addWeighted(normalMap, 0.4, normalMapBlurred, 0.6, 0, normalMap);
    }
    addWeighted(originalNormalMap, 0.3, normalMapBlurred, 0.7, 0, normalMap);
    return normalMap;
}
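For completeness, a small usage sketch (the file names are placeholders; the function above expects a single-channel grayscale input):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat height = cv::imread("height.png", cv::IMREAD_GRAYSCALE);
    if (height.empty())
        return 1;

    cv::Mat normals = normalMap(height, 2.0); // normalMap as defined above
    cv::imwrite("normals.png", normals);
    return 0;
}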