C++/3D Terrain: std::vector push_back() crashes with c0000374

When attempting to push back UINTs into a vector, the program crashes with Critical error detected c0000374. Below is the initial code:
void Terrain::CreateIndexList(UINT Width, UINT Height){
    UINT sz_iList = (Width - 1)*(Height - 1) * 6;
    UINT *iList = new UINT[sz_iList];
    for (int i = 0; i < Width; i++){
        for (int j = 0; j < Height; j++){
            iList[(i + j * (Width - 1)) * 6] = ((UINT)(2 * i));
            iList[(i + j * (Width - 1)) * 6 + 1] = (UINT)(2 * i + 1);
            iList[(i + j * (Width - 1)) * 6 + 2] = (UINT)(2 * i + 2);
            iList[(i + j * (Width - 1)) * 6 + 3] = (UINT)(2 * i + 2);
            iList[(i + j * (Width - 1)) * 6 + 4] = (UINT)(2 * i + 1);
            iList[(i + j * (Width - 1)) * 6 + 5] = (UINT)(2 * i + 3);
        }
    }
    for (int i = 0; i < sz_iList; i++){
        Geometry.IndexVertexData.push_back(iList[i]);
    }
    delete[] iList;
}
The goal is to take the indices generated in the iList array and fill the Geometry.IndexVertexData vector with them. While debugging, I've also tried several other implementations:
//After creating the iList array:
Geometry.IndexVertexData.resize(sz_iList); //Fails with "Vector subscript out of range?"
UINT in = 0;
for (int i = 0; i < Width; i++){
    for (int j = 0; j < Height; j++){
        Geometry.IndexVertexData[(i + j*(Width - 1)) * 6] = iList[in];
        in++;
        Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 1] = iList[in];
        in++;
        Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 2] = iList[in];
        in++;
        Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 3] = iList[in];
        in++;
        Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 4] = iList[in];
        in++;
        Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 5] = iList[in];
        in++;
    }
}
And a final, direct-to-vector implementation:
Geometry.IndexVertexData.reserve(sz_iList);
for (int index = 0; index < sz_iList; index += 6) {
    Geometry.IndexVertexData[(i + j*(Width - 1)) * 6] = ((UINT)(2 * i));
    Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 1] = (UINT)(2 * i + 1);
    Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 2] = (UINT)(2 * i + 2);
    Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 3] = (UINT)(2 * i + 2);
    Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 4] = (UINT)(2 * i + 1);
    Geometry.IndexVertexData[(i + j*(Width - 1)) * 6 + 5] = (UINT)(2 * i + 3);
}
sz_iList has a final value of 2166, resulting from a 20x20 grid (400 points total), and is used to initialize the sizes. In all cases the vector never fills completely; the program crashes with Critical error detected c0000374. Am I doing something wrong?

Your sz_iList doesn't appear to be big enough. Take a simple example with Width = Height = 2: then sz_iList = (2 - 1) * (2 - 1) * 6 = 6. But in your nested loops the last iteration occurs when i = j = 1 (i runs up to Width - 1 and j up to Height - 1), and in the last line of the loop body you access element (i + j * (Width - 1)) * 6 + 5 = (1 + 1 * (2 - 1)) * 6 + 5 = 2 * 6 + 5 = 17, which is past the end of your array. That is undefined behavior. Note also that in your third variant, reserve() only allocates capacity; it does not change the vector's size, so writing through operator[] after reserve() is out of range as well.
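For illustration only, here is a minimal sketch that keeps the loop bounds, the array size, and the vector consistent. It assumes a row-major vertex layout with one vertex per grid point and two triangles per cell, which is not shown in the question, so treat the index math as a placeholder for your own vertex ordering:

void Terrain::CreateIndexList(UINT Width, UINT Height){
    // (Width - 1) * (Height - 1) cells, 6 indices per cell
    const UINT sz_iList = (Width - 1) * (Height - 1) * 6;
    Geometry.IndexVertexData.clear();
    Geometry.IndexVertexData.reserve(sz_iList);
    for (UINT j = 0; j < Height - 1; ++j){        // stop one row short
        for (UINT i = 0; i < Width - 1; ++i){     // stop one column short
            UINT topLeft     = j * Width + i;     // assumed row-major vertex layout
            UINT topRight    = topLeft + 1;
            UINT bottomLeft  = (j + 1) * Width + i;
            UINT bottomRight = bottomLeft + 1;
            // two triangles per cell
            Geometry.IndexVertexData.push_back(topLeft);
            Geometry.IndexVertexData.push_back(bottomLeft);
            Geometry.IndexVertexData.push_back(topRight);
            Geometry.IndexVertexData.push_back(topRight);
            Geometry.IndexVertexData.push_back(bottomLeft);
            Geometry.IndexVertexData.push_back(bottomRight);
        }
    }
}

With the loops bounded by Width - 1 and Height - 1, every write stays within sz_iList, and using push_back after reserve avoids both the overrun and the reserve-vs-resize pitfall.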

Related

How to rotate a 32-bit raster properly?

I'm working on a 2D graphics engine. When I use the following code to rotate images, I get a 'write access violation' exception on newBits if the image dimensions are even numbers. There is no problem with odd dimensions.
Here is my image rotation code:
bool Graphics::Raster::rotate(float angle)
{
    try {
        unsigned int xOrigin{ mWidth / 2 };
        unsigned int yOrigin{ mHeight / 2 };
        std::array<Math::Vector2D, 4> boundingBoxVertices;
        Math::Matrix2x2 rotationMatrix;
        boundingBoxVertices[0].setX((float)xOrigin * -1.0f);
        boundingBoxVertices[0].setY((float)yOrigin);
        boundingBoxVertices[1].setX((float)(mWidth - xOrigin) * 1.0f);
        boundingBoxVertices[1].setY((float)yOrigin);
        boundingBoxVertices[2].setX((float)(mWidth - xOrigin) * 1.0f);
        boundingBoxVertices[2].setY((float)(mHeight - yOrigin) * -1.0f);
        boundingBoxVertices[3].setX((float)xOrigin * -1.0f);
        boundingBoxVertices[3].setY((float)(mHeight - yOrigin) * -1.0f);
        int x{ 0 }, y{ 0 }, maxX{ 0 }, minX{ 0 }, maxY{ 0 }, minY{ 0 };
        rotationMatrix.setToRotation(angle);
        for (size_t i = 0; i < 4; ++i) {
            boundingBoxVertices[i] *= rotationMatrix;
            boundingBoxVertices[i].round();
            x = (int)boundingBoxVertices[i].getX();
            y = (int)boundingBoxVertices[i].getY();
            if (x < minX) {
                minX = x;
            }
            if (x > maxX) {
                maxX = x;
            }
            if (y < minY) {
                minY = y;
            }
            if (y > maxY) {
                maxY = y;
            }
        }
        size_t newWidth = (size_t)(maxX - minX);
        size_t newHeight = (size_t)(maxY - minY);
        BYTE* newBits{ nullptr };
        if (newBits = new BYTE[newWidth * newHeight * 4]{ 0 }) {
            int newOrgX = newWidth / 2;
            int newOrgY = newHeight / 2;
            Math::Vector2D pixVec{ 0.0f, 0.0f };
            int oldCoordX{ 0 };
            int oldCoordY{ 0 };
            int newCoordX{ 0 };
            int newCoordY{ 0 };
            unsigned int oldIndex{ 0 };
            unsigned int newIndex{ 0 };
            for (size_t i = 0; i < mWidth * mHeight; ++i) {
                oldCoordX = i % mWidth - xOrigin;
                oldCoordY = yOrigin - i / mWidth;
                pixVec.setX((float)oldCoordX);
                pixVec.setY((float)oldCoordY);
                pixVec *= rotationMatrix;
                pixVec.round();
                newCoordX = (unsigned int)(pixVec.getX() + newOrgX);
                newCoordY = (unsigned int)(newOrgY - pixVec.getY());
                oldIndex = i * 4;
                newIndex = (newCoordY * newWidth * 4) + ((newCoordX) * 4);
                newBits[newIndex + 0] = m32Bits[oldIndex + 0];
                newBits[newIndex + 1] = m32Bits[oldIndex + 1];
                newBits[newIndex + 2] = m32Bits[oldIndex + 2];
                newBits[newIndex + 3] = m32Bits[oldIndex + 3];
            }
            if (angle != 0.0f || angle != 90.0f || angle != 180.0f || angle != 270.0f || angle != 360.0f ||
                angle != -0.0f || angle != -90.0f || angle != -180.0f || angle != -270.0f || angle != -360.0f) {
                for (size_t i = 0; i < newHeight; ++i) {
                    for (size_t j = 0; j < newWidth; ++j) {
                        if (j != 0 && j != newWidth - 1) {
                            if (newBits[(i * newWidth * 4) + (j * 4) + 0] == 0 &&
                                newBits[(i * newWidth * 4) + (j * 4) + 1] == 0 &&
                                newBits[(i * newWidth * 4) + (j * 4) + 2] == 0 &&
                                newBits[(i * newWidth * 4) + (j * 4) + 3] == 0) {
                                newBits[(i * newWidth * 4) + (j * 4) + 0] = (newBits[(i * newWidth * 4) + ((j - 1) * 4) + 0] +
                                    newBits[(i * newWidth * 4) + ((j + 1) * 4) + 0]) / 2;
                                newBits[(i * newWidth * 4) + (j * 4) + 1] = (newBits[(i * newWidth * 4) + ((j - 1) * 4) + 1] +
                                    newBits[(i * newWidth * 4) + ((j + 1) * 4) + 1]) / 2;
                                newBits[(i * newWidth * 4) + (j * 4) + 2] = (newBits[(i * newWidth * 4) + ((j - 1) * 4) + 2] +
                                    newBits[(i * newWidth * 4) + ((j + 1) * 4) + 2]) / 2;
                                newBits[(i * newWidth * 4) + (j * 4) + 3] = (newBits[(i * newWidth * 4) + ((j - 1) * 4) + 3] +
                                    newBits[(i * newWidth * 4) + ((j + 1) * 4) + 3]) / 2;
                            }
                        }
                    }
                }
            }
            if (set32Bits(newBits, newWidth, newHeight)) {
                delete[] newBits;
                return true;
            } else {
                delete[] newBits;
                return false;
            }
        } else {
            throw Error::Exception(L"Resim çevirme işlemi için hafızada yer açılamadı", L"Resim Düzenleme Hatası");
        }
    } catch (Error::Exception& ex) {
        Error::ShowError(ex.getErrorMessage(), ex.getErrorTitle());
        return false;
    }
}
What am I doing wrong here?
I can't use a third-party library to rotate the images; I must use this function.
Thanks in advance.
The problem was an indexing problem when setting newBits: a rotated pixel can map to a coordinate just outside the computed bounding box, so the write into newBits has to be bounds-checked first. Here is the updated function:
BYTE* Graphics::RotateBits(const BYTE* bits, const int width, const int height, float angle, int* newWidth, int* newHeight)
{
    try {
        int xOrigin{ width / 2 };
        int yOrigin{ height / 2 };
        std::array<Math::Vector2D, 4> boundingBoxVertices;
        Math::Matrix2x2 rotationMatrix;
        boundingBoxVertices[0].setX((float)xOrigin * -1.0f);
        boundingBoxVertices[0].setY((float)yOrigin);
        boundingBoxVertices[1].setX((float)(width - xOrigin) * 1.0f);
        boundingBoxVertices[1].setY((float)yOrigin);
        boundingBoxVertices[2].setX((float)(width - xOrigin) * 1.0f);
        boundingBoxVertices[2].setY((float)(height - yOrigin) * -1.0f);
        boundingBoxVertices[3].setX((float)xOrigin * -1.0f);
        boundingBoxVertices[3].setY((float)(height - yOrigin) * -1.0f);
        int x{ 0 }, y{ 0 }, maxX{ 0 }, minX{ 0 }, maxY{ 0 }, minY{ 0 };
        rotationMatrix.setToRotation(angle);
        for (int i = 0; i < 4; ++i) {
            boundingBoxVertices[i] *= rotationMatrix;
            boundingBoxVertices[i].round();
            x = (int)boundingBoxVertices[i].getX();
            y = (int)boundingBoxVertices[i].getY();
            if (x < minX) {
                minX = x;
            }
            if (x > maxX) {
                maxX = x;
            }
            if (y < minY) {
                minY = y;
            }
            if (y > maxY) {
                maxY = y;
            }
        }
        *newWidth = (maxX - minX);
        *newHeight = (maxY - minY);
        BYTE* newBits = new BYTE[*newWidth * *newHeight * 4]{ 0 };
        int newOrgX = *newWidth / 2;
        int newOrgY = *newHeight / 2;
        Math::Vector2D pixVec{ 0.0f, 0.0f };
        int oldCoordX{ 0 };
        int oldCoordY{ 0 };
        int newCoordX{ 0 };
        int newCoordY{ 0 };
        int oldIndex{ 0 };
        int newIndex{ 0 };
        for (int i = 0; i < width * height; ++i) {
            oldCoordX = i % width - xOrigin;
            oldCoordY = yOrigin - i / width;
            pixVec.setX((float)oldCoordX);
            pixVec.setY((float)oldCoordY);
            pixVec *= rotationMatrix;
            pixVec.round();
            newCoordX = (int)pixVec.getX() + newOrgX;
            newCoordY = newOrgY - (int)pixVec.getY();
            oldIndex = i * 4;
            newIndex = (newCoordY * *newWidth * 4) + (newCoordX * 4);
            if (newIndex >= 0 && newIndex <= *newWidth * *newHeight * 4 - 4) {
                newBits[newIndex + 0] = bits[oldIndex + 0];
                newBits[newIndex + 1] = bits[oldIndex + 1];
                newBits[newIndex + 2] = bits[oldIndex + 2];
                newBits[newIndex + 3] = bits[oldIndex + 3];
            }
        }
        if (((int)angle) % 90) {
            int index{ 0 };
            int prevIndex{ 0 };
            int nextIndex{ 0 };
            for (int i = 0; i < *newHeight; ++i) {
                for (int j = 0; j < *newWidth; ++j) {
                    if (j != 0 && j != *newWidth - 1) {
                        index = (i * *newWidth * 4) + (j * 4);
                        if (newBits[index + 0] == 0 &&
                            newBits[index + 1] == 0 &&
                            newBits[index + 2] == 0 &&
                            newBits[index + 3] == 0) {
                            prevIndex = (i * *newWidth * 4) + ((j - 1) * 4);
                            nextIndex = (i * *newWidth * 4) + ((j + 1) * 4);
                            newBits[index + 0] = (newBits[prevIndex + 0] + newBits[nextIndex + 0]) / 2;
                            newBits[index + 1] = (newBits[prevIndex + 1] + newBits[nextIndex + 1]) / 2;
                            newBits[index + 2] = (newBits[prevIndex + 2] + newBits[nextIndex + 2]) / 2;
                            newBits[index + 3] = (newBits[prevIndex + 3] + newBits[nextIndex + 3]) / 2;
                        }
                    }
                }
            }
        }
        return newBits;
    } catch (Error::Exception& ex) {
        Error::ShowError(ex.getErrorMessage(), ex.getErrorTitle());
        return nullptr;
    } catch (std::exception& ex) {
        Error::ShowError((LPCWSTR)ex.what(), L"Bit Düzenleme Hatası");
        return nullptr;
    }
}

Exception at memory location (vector issue) with OpenCV

I am trying to find the average of 2x2 pixel blocks within a 6x6 window over an image of overall size m x n. I can find the block averages until the end of the first row, but when the code moves to the next row it throws the runtime error "exception at memory location".
vector<int> m; vector<int> m1; vector<int> m2; vector<int> m3; vector<int> m4; vector<int> m5; vector<int> m6; vector<int> m7; vector<int> m8;
for (int i = 2; i < road.rows - 2; i++){
    for (int j = 2; j < road.cols - 2; j++){
        //center block
        int avg = (round((road.at<uchar>(i, j) + road.at<uchar>(i, j + 1) + road.at<uchar>(i + 1, j) + road.at<uchar>(i + 1, j + 1)) / 4));
        //top left block
        int avg1 = (round((road.at<uchar>(i - 2, j - 2) + road.at<uchar>(i - 2, j - 1) + road.at<uchar>(i - 1, j - 2) + road.at<uchar>(i - 1, j - 1)) / 4));
        //top
        int avg2 = (round((road.at<uchar>(i - 2, j) + road.at<uchar>(i - 2, j + 1) + road.at<uchar>(i - 1, j) + road.at<uchar>(i - 1, j + 1)) / 4));
        //top right block
        int avg3 = (round((road.at<uchar>(i - 2, j + 2) + road.at<uchar>(i - 2, j + 3) + road.at<uchar>(i - 1, j + 2) + road.at<uchar>(i - 1, j + 3)) / 4));
        //left block
        int avg4 = (round((road.at<uchar>(i, j - 2) + road.at<uchar>(i, j - 1) + road.at<uchar>(i + 1, j - 2) + road.at<uchar>(i + 1, j - 1)) / 4));
        //right block
        int avg5 = (round((road.at<uchar>(i, j + 2) + road.at<uchar>(i, j + 3) + road.at<uchar>(i + 1, j + 2) + road.at<uchar>(i + 1, j + 3)) / 4));
        //bottom left block
        int avg6 = (round((road.at<uchar>(i + 2, j - 2) + road.at<uchar>(i + 2, j - 1) + road.at<uchar>(i + 3, j - 2) + road.at<uchar>(i + 3, j - 1)) / 4));
        //bottom
        int avg7 = (round((road.at<uchar>(i + 2, j) + road.at<uchar>(i + 2, j + 1) + road.at<uchar>(i + 3, j) + road.at<uchar>(i + 3, j + 1)) / 4));
        //bottom right block
        int avg8 = (round((road.at<uchar>(i + 2, j + 2) + road.at<uchar>(i + 2, j + 3) + road.at<uchar>(i + 3, j + 2) + road.at<uchar>(i + 3, j + 3)) / 4));
        m.push_back(avg);
        m1.push_back(avg1);
        m2.push_back(avg2);
        m3.push_back(avg3);
        m4.push_back(avg4);
        m5.push_back(avg5);
        m6.push_back(avg6);
        m7.push_back(avg7);
        m8.push_back(avg8);
    }
}
Help me out with this error.
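As a hedged observation (this question has no answer in the thread): the bottom and right neighbour blocks read as far as road.at<uchar>(i + 3, j + 3), while the loops let i reach road.rows - 3 and j reach road.cols - 3, so the last iterations likely read one pixel past the image. A minimal sketch with tighter bounds follows; blockAverage and collectCenterAverages are made-up illustrative names, and only the centre block is shown:

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Hypothetical helper: average of the 2x2 block whose top-left corner is (i, j),
// assuming a single-channel 8-bit image.
static int blockAverage(const cv::Mat& img, int i, int j)
{
    return (int)std::lround((img.at<uchar>(i, j)     + img.at<uchar>(i, j + 1) +
                             img.at<uchar>(i + 1, j) + img.at<uchar>(i + 1, j + 1)) / 4.0);
}

void collectCenterAverages(const cv::Mat& road, std::vector<int>& m)
{
    // The farthest reads in the original loop body are (i + 3, j + 3), so the
    // loop bounds must stop three pixels short of the bottom/right edges.
    for (int i = 2; i < road.rows - 3; ++i)
        for (int j = 2; j < road.cols - 3; ++j)
            m.push_back(blockAverage(road, i, j));   // centre block; the eight neighbour blocks are analogous
}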

Errors when converting IplImage to Mat in OpenCV

void LBP(Mat src, IplImage* dst)
{
    int tmp[8] = { 0 };
    CvScalar s;
    Mat temp = Mat(src.size(), IPL_DEPTH_8U, 1);
    uchar *data = (uchar*)src.data;
    int step = src.step;
    //cout << "step" << step << endl;
    for (int i = 1; i < src.size().height - 1; i++)
        for (int j = 1; j < src.size().width - 1; j++)
        {
            int sum = 0;
            if (data[(i - 1)*step + j - 1] > data[i*step + j])
                tmp[0] = 1;
            else
                tmp[0] = 0;
            if (data[i*step + (j - 1)] > data[i*step + j])
                tmp[1] = 1;
            else
                tmp[1] = 0;
            if (data[(i + 1)*step + (j - 1)] > data[i*step + j])
                tmp[2] = 1;
            else
                tmp[2] = 0;
            if (data[(i + 1)*step + j] > data[i*step + j])
                tmp[3] = 1;
            else
                tmp[3] = 0;
            if (data[(i + 1)*step + (j + 1)] > data[i*step + j])
                tmp[4] = 1;
            else
                tmp[4] = 0;
            if (data[i*step + (j + 1)] > data[i*step + j])
                tmp[5] = 1;
            else
                tmp[5] = 0;
            if (data[(i - 1)*step + (j + 1)] > data[i*step + j])
                tmp[6] = 1;
            else
                tmp[6] = 0;
            if (data[(i - 1)*step + j] > data[i*step + j])
                tmp[7] = 1;
            else
                tmp[7] = 0;
            s.val[0] = (tmp[0] * 1 + tmp[1] * 2 + tmp[2] * 4 + tmp[3] * 8 + tmp[4] * 16 + tmp[5] * 32 + tmp[6] * 64 + tmp[7] * 128);
            cvSet2D(dst, i, j, s);
        }
}
Above is my original code for local binary patterns; src is the input matrix and dst is the output. Now I want to change the IplImage* in void LBP(Mat src, IplImage* dst) to a Mat, i.e. void LBP(Mat src, Mat dst). I tried many ways but I always ran into problems like assertion failures; I think the problem is cvSet2D(dst, i, j, s);.
This is the definition of the input src:
Mat Gray_face = Mat(image.size(), image.depth(), 1);
This is my definition of the output dst:
IplImage* lbp_face = cvCreateImage(Gray_face.size(), IPL_DEPTH_8U, 1);
And I want to change it to Mat and make it work in my program.
This is how I call the LBP function:
LBP(Gray_face, lbp_face);
I am quite new to this; can anyone help me? Thank you very much!
Indeed, cvSet2D is the old interface. To set a point in a cv::Mat dst you can use at(). First, (re)allocate dst as:
dst.create(src.size(),CV_8U);
Then to set a value point in your case:
dst.at<uchar>(i, j) = (tmp[0] * 1 + tmp[1] * 2 + tmp[2] * 4 + tmp[3] * 8 + tmp[4] * 16 + tmp[5] * 32 + tmp[6] * 64 + tmp[7] * 128);
Last but not least, if you want the result returned to the caller, your function signature should be:
void LBP(Mat src, Mat& dst);
with a reference (&) to the destination.
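For illustration, here is a minimal sketch of the whole function converted to a cv::Mat destination, keeping the same neighbour ordering and bit weights as the original code. Treat it as a sketch under those assumptions rather than the answer's exact code:

#include <opencv2/core.hpp>
using cv::Mat;

// Sketch: LBP with a cv::Mat output, assuming an 8-bit single-channel source.
void LBP(const Mat& src, Mat& dst)
{
    dst.create(src.size(), CV_8U);   // allocate 8-bit single-channel output
    dst.setTo(0);
    for (int i = 1; i < src.rows - 1; ++i) {
        for (int j = 1; j < src.cols - 1; ++j) {
            const uchar c = src.at<uchar>(i, j);
            uchar code = 0;
            // same neighbour order and weights (1, 2, 4, ..., 128) as the original tmp[0..7]
            code |= (src.at<uchar>(i - 1, j - 1) > c) << 0;
            code |= (src.at<uchar>(i,     j - 1) > c) << 1;
            code |= (src.at<uchar>(i + 1, j - 1) > c) << 2;
            code |= (src.at<uchar>(i + 1, j    ) > c) << 3;
            code |= (src.at<uchar>(i + 1, j + 1) > c) << 4;
            code |= (src.at<uchar>(i,     j + 1) > c) << 5;
            code |= (src.at<uchar>(i - 1, j + 1) > c) << 6;
            code |= (src.at<uchar>(i - 1, j    ) > c) << 7;
            dst.at<uchar>(i, j) = code;
        }
    }
}

It would then be called with a Mat destination, e.g. Mat lbp_face; LBP(Gray_face, lbp_face);, instead of the cvCreateImage-allocated IplImage*.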

CUDA: working with arrays of different sizes

In this example, I am trying to create a 10x8 array using values from a 10x9 array. It looks like I am accessing memory incorrectly, but I am not sure where my error is.
The code in C++ would be something like
for (int h = 0; h < height; h++){
    for (int i = 0; i < (width - 2); i++)
        dd[h*(width-2)+i] = hi[h*(width-1)+i] + hi[h*(width-1)+i+1];
}
This is what I am trying in CUDA:
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <stdint.h>
#include <iostream>
#define TILE_WIDTH 4
using namespace std;
__global__ void cudaOffsetArray(int height, int width, float *HI, float *DD){
int x = blockIdx.x * blockDim.x + threadIdx.x; // Col // width
int y = blockIdx.y * blockDim.y + threadIdx.y; // Row // height
int grid_width = gridDim.x * blockDim.x;
//int index = y * grid_width + x;
if ((x < (width - 2)) && (y < (height)))
DD[y * (grid_width - 2) + x] = (HI[y * (grid_width - 1) + x] + HI[y * (grid_width - 1) + x + 1]);
}
int main(){
int height = 10;
int width = 10;
float *HI = new float [height * (width - 1)];
for (int i = 0; i < height; i++){
for (int j = 0; j < (width - 1); j++)
HI[i * (width - 1) + j] = 1;
}
float *gpu_HI;
float *gpu_DD;
cudaMalloc((void **)&gpu_HI, (height * (width - 1) * sizeof(float)));
cudaMalloc((void **)&gpu_DD, (height * (width - 2) * sizeof(float)));
cudaMemcpy(gpu_HI, HI, (height * (width - 1) * sizeof(float)), cudaMemcpyHostToDevice);
dim3 dimGrid((width - 1) / TILE_WIDTH + 1, (height - 1)/TILE_WIDTH + 1, 1);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
cudaOffsetArray<<<dimGrid,dimBlock>>>(height, width, gpu_HI, gpu_DD);
float *result = new float[height * (width - 2)];
cudaMemcpy(result, gpu_DD, (height * (width - 2) * sizeof(float)), cudaMemcpyDeviceToHost);
for (int i = 0; i < height; i++){
for (int j = 0; j < (width - 2); j++)
cout << result[i * (width - 2) + j] << " ";
cout << endl;
}
cudaFree(gpu_HI);
cudaFree(gpu_DD);
delete[] result;
delete[] HI;
system("pause");
}
I've also tried this in the global function:
if ((x < (width - 2)) && (y < (height)))
    DD[y * (grid_width - 2) + (blockIdx.x - 2) * blockDim.x + threadIdx.x] =
        (HI[y * (grid_width - 1) + (blockIdx.x - 1) * blockDim.x + threadIdx.x] +
         HI[y * (grid_width - 1) + (blockIdx.x - 1) * blockDim.x + threadIdx.x + 1]);
To "fix" your code, change each use of grid_width to width in this line in your kernel:
DD[y * (grid_width - 2) + x] = (HI[y * (grid_width - 1) + x] + HI[y * (grid_width - 1) + x + 1]);
Like this:
DD[y * (width - 2) + x] = (HI[y * (width - 1) + x] + HI[y * (width - 1) + x + 1]);
Explanation:
Your grid_width:
dim3 dimGrid((width * 2 - 1) / TILE_WIDTH + 1, (height - 1)/TILE_WIDTH + 1, 1);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
doesn't actually correspond to your array sizes (10x10, 10x9, or 10x8). I'm not sure why you're launching 2*width threads in the x dimension, but it means that your thread array is considerably larger than your data array.
So when you use grid_width in the kernel:
DD[y * (grid_width - 2) + x] = (HI[y * (grid_width - 1) + x] + HI[y * (grid_width - 1) + x + 1]);
the indexing will be a problem. If you instead change each instance of grid_width above to just width (which corresponds to the actual width of your data array) you'll get better indexing, I think. Normally it's not an issue to launch "extra threads" because you have a thread check line in your kernel:
if ((x < (width - 2)) && (y < (height)))
but when you launch extra threads, it is making your grid larger, and so you can't use grid dimensions to index properly into your data array.
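For completeness, here is a minimal sketch of the kernel with that suggestion applied, indexing each array by its own row pitch rather than the launch grid's width (a sketch of the fix, not tested code):

__global__ void cudaOffsetArray(int height, int width, const float *HI, float *DD){
    int x = blockIdx.x * blockDim.x + threadIdx.x; // column
    int y = blockIdx.y * blockDim.y + threadIdx.y; // row
    if ((x < (width - 2)) && (y < height))
        // HI has a row pitch of (width - 1) elements, DD a row pitch of (width - 2)
        DD[y * (width - 2) + x] = HI[y * (width - 1) + x] + HI[y * (width - 1) + x + 1];
}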

How Can I Remove Pixel Noise from ofxKinect Video?

I'm looking for some help figuring out how to remove low-quality pixel noise from a video that I'm obtaining from an Xbox Kinect via openFrameworks. I'm running logic against the "moving" parts of an image to determine which color is moving the most, and using those regions to also detect the depth at which those pixels are moving. I'm attaching a photo to try to better explain my issue.
http://imago.bryanmoyles.com/xxw80
Of course I know code will be asked for, so I'll post what I have so far, but what I'm looking for more than anything else is a good algorithm for smoothing out pixelated regions in a photo using C++.
for(int y = 0; y < kinect.height; y += grid_size) {
    for(int x = 0; x < kinect.width * 3; x += 3 * grid_size) {
        unsigned int total_r = 0, total_b = 0, total_g = 0;
        for(int r = 0; r < grid_size; r++) {
            for(int c = 0; c < grid_size; c++) {
                total_r += color_pixels[(y * kinect.width * 3 + r * kinect.width * 3) + (c * 3 + x + 0)];
                total_b += color_pixels[(y * kinect.width * 3 + r * kinect.width * 3) + (c * 3 + x + 1)];
                total_g += color_pixels[(y * kinect.width * 3 + r * kinect.width * 3) + (c * 3 + x + 2)];
            }
        }
        unsigned char average_r = total_r / (grid_size * grid_size),
                      average_b = total_b / (grid_size * grid_size),
                      average_g = total_g / (grid_size * grid_size);
        for(int r = 0; r < grid_size; r++) {
            for(int c = 0; c < grid_size; c++) {
                color_pixels[(y * kinect.width * 3 + r * kinect.width * 3) + (c * 3 + x + 0)] = average_r;
                color_pixels[(y * kinect.width * 3 + r * kinect.width * 3) + (c * 3 + x + 1)] = average_b;
                color_pixels[(y * kinect.width * 3 + r * kinect.width * 3) + (c * 3 + x + 2)] = average_g;
            }
        }
    }
}

for(int y = 0; y < kinect.height; y++) {
    for (int x = 0; x < kinect.width * 3; x += 3) {
        int total_difference = abs(color_pixels[y * kinect.width * 3 + x + 0] - rgb[0])
                             + abs(color_pixels[y * kinect.width * 3 + x + 1] - rgb[1])
                             + abs(color_pixels[y * kinect.width * 3 + x + 2] - rgb[2]);
        unsigned char defined_color;
        if(total_difference < 40) {
            defined_color = (unsigned char) 255;
        } else {
            defined_color = (unsigned char) 0;
        }
        color_pixels[y * kinect.width * 3 + x + 0] = defined_color;
        color_pixels[y * kinect.width * 3 + x + 1] = defined_color;
        color_pixels[y * kinect.width * 3 + x + 2] = defined_color;
    }
}
Again, I'd like to reiterate that my code is not the problem; I'm simply posting it here so you understand I'm not just asking blindly. What I really need is some direction on how to smooth out pixelated images, so that my averages don't get thrown off frame by frame by poor image quality.
You can process the image from the camera with some methods from the ofxOpenCv addon. There you will have methods like blur, undistort, erode, etc. It's easy to set up because it's already an addon. Have a look at the openCvExample that ships with your openFrameworks. For more information on the mentioned methods, take a look here. If I understand your problem correctly, a little blur on the image could already fix your problem.
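To make that concrete, here is a rough sketch of blurring the kinect RGB frame with ofxOpenCv before the thresholding pass. The method names (allocate, setFromPixels, blurGaussian, getPixels) are from ofxOpenCv as commonly used, but treat them as assumptions and verify them against the addon headers in your openFrameworks version:

// Sketch only: smooth the RGB frame before the averaging/thresholding logic.
ofxCvColorImage colorImg;
colorImg.allocate(kinect.width, kinect.height);                     // once, e.g. in setup()
colorImg.setFromPixels(color_pixels, kinect.width, kinect.height);  // copy in the kinect RGB buffer
colorImg.blurGaussian(5);                                           // small odd kernel size; tune to taste
unsigned char* smoothed = colorImg.getPixels();                     // run the existing logic on these pixels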