Segmentation Fault when loading image pixel by pixel using CImg - c++

I am trying to compute the mean of an image by loading it pixel by pixel.
My image has 6 channels, its height and width are 512, and its depth is 1.
It is stored at the first position of an ImgList containing 2 elements.
My code is as follows:
int main(){
    float mean = 0;
    CImgList<float> img;
    int c, x, y;
    for(c=0; c<6; ++c)
        for(x=0; x<512; ++x)
            for(y=0; y<512; ++y){
                img.load_cimg("test_images/Simul_PolSAR.cimg", 0, 0, x, y, 0, c, x, y, 0, c);
                mean += img(0)(0,0,0,0);
            }
    mean = mean/(6*512*512);
}
When I run it, everything works fine until the value of "c" changes from 0 to 1. Then, the line accessing img(0)(0,0,0,0) makes the program crash with a segmentation fault error.
Also if I check:
img.load_cimg("image.cimg", 0, 0, 0, 0, 0, 1, 0, 0, 0, 1);
img(0).print();
The result is:
CImg<float>: this = 0x14330f8, size = (0,0,0,0) [0 b], data = (float*)(nil) (non-shared) = [ ].
I am quite sure the code is correct and the image is intact (I tried with different images). Any idea why this is happening?
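For comparison, here is a minimal sketch of the simpler approach, assuming the whole file fits in memory: load the list once and use CImg's built-in mean() instead of reloading a sub-image per pixel. It does not explain the crash, it just avoids the repeated partial loads.
#include "CImg.h"
using namespace cimg_library;

int main()
{
    CImgList<float> list;
    // Load the complete list once instead of one sub-image per pixel.
    list.load_cimg("test_images/Simul_PolSAR.cimg");

    // Mean over all pixels and channels of the first image in the list.
    const double mean = list(0).mean();
    return 0;
}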


ArrayFire: Translate a batch of images at the same time

I'm using ArrayFire and I need to translate a lot of images at once and store them in a new array. The images are contained in a single array of size (w, h, c, b), and the amount by which each image needs to be translated is in a (2, 1, 1, b) array.
The sequential implementation is as follows (t_imgs is assumed to be allocated beforehand with the same shape as imgs):
for (int i = 0; i < b; i++)
{
    float x = coords(0, 0, 0, i).scalar<float>();
    float y = coords(1, 0, 0, i).scalar<float>();
    t_imgs(af::span, af::span, af::span, i) =
        af::translate(imgs(af::span, af::span, af::span, i), x, y);
}
How could I parallelize it? Translate doesn't accept arrays as arguments, so I can't do something like this:
gfor(af::seq i, b)
{
    af::array x = coords(0, 0, 0, i);
    af::array y = coords(1, 0, 0, i);
    t_imgs(af::span, af::span, af::span, i) =
        af::translate(imgs(af::span, af::span, af::span, i), x, y);
}
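One hedged workaround, not the batched gfor version asked about: copy all the offsets to the host in a single transfer and keep the per-image translate loop, which at least removes the two scalar<float>() round-trips per iteration. translate_batch is a hypothetical helper, and the buffer layout assumes coords is an f32 array of shape (2, 1, 1, b) as described above.
#include <arrayfire.h>
#include <vector>

af::array translate_batch(const af::array& imgs, const af::array& coords)
{
    const int b = static_cast<int>(imgs.dims(3));

    // One device-to-host copy for all offsets; column-major layout of the
    // (2, 1, 1, b) f32 array gives pairs [x0, y0, x1, y1, ...].
    std::vector<float> offsets(coords.elements());
    coords.host(offsets.data());

    af::array out = imgs;  // output batch with the same shape
    for (int i = 0; i < b; ++i)
    {
        const float x = offsets[2 * i + 0];
        const float y = offsets[2 * i + 1];
        out(af::span, af::span, af::span, i) =
            af::translate(imgs(af::span, af::span, af::span, i), x, y);
    }
    return out;
}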

inverse fft of fft not returning expected data

I'm trying to make sure FFTW does what I think it should do, but am having problems. I'm using OpenCV's cv::Mat. I made a test program that, given a Mat f, computes ifft(fft(f)) and compares the result to f. I would expect the difference between the two to be negligible, but there's a strange pattern in the data.
In this case, f is initialized to be an 8x8 array of floats with positive values less than 1.
Here's my test program code:
Mat f = .. //populate f
if (f.type() != CV_32FC1)
DLOG << "Bad f type";
const int y = f.rows;
const int x = f.cols;
double* input = fftw_alloc_real(y * 2*(x/2 + 1));
// forward fft
fftw_plan plan = fftw_plan_dft_r2c_2d(x, y, input, (fftw_complex*)input, FFTW_MEASURE);
// inverse fft
fftw_plan iplan = fftw_plan_dft_c2r_2d(x, y, (fftw_complex*)input, input, FFTW_MEASURE);
// populate fftw data from f
for (int yi = 0; yi < y; ++yi)
{
    const float* yptr = f.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        input[yi*x + xi] = (double)yptr[xi];
}
fftw_execute(plan);
fftw_execute(iplan);
// put data into another cv::Mat for comparison
Mat check(y, x, f.type());
for (int yi = 0; yi < y; ++yi)
{
    float* yptr = check.ptr<float>(yi);
    for (int xi = 0; xi < x ; ++xi)
        yptr[xi] = (float)input[yi*x + xi];
}
DLOG << Util::summary(f, "f");
DLOG << f;
DLOG << Util::summary(check, "check");
DLOG << check;
Mat diff = f*x*y - check;
DLOG << Util::summary(diff, "diff");
DLOG << diff;
Where DLOG is my logger and Util::summary(cv::Mat m) just prints the passed string and the dimensions, channels, min, and max of the mat.
Here's what the data looks like (output):
f: rows:8 cols:8 chans:1 min:0.00257996 max:0.4
[0.050668437, 0.04509116, 0.033668514, 0.10986148, 0.12855141, 0.048241843, 0.12613985, 0.09731093;
0.028602425, 0.0092236707, 0.037089188, 0.118964, 0.075040311, 0.40000001, 0.11959606, 0.071930833;
0.0025799556, 0.051522054, 0.22233701, 0.052993439, 0.032000393, 0.12673819, 0.015244827, 0.044803992;
0.13946071, 0.019708242, 0.0112687, 0.047459811, 0.019342113, 0.030085485, 0.018739942, 0.0098618753;
0.041809395, 0.029681522, 0.026837418, 0.16038358, 0.29034778, 0.17247421, 0.1789207, 0.042179305;
0.025630442, 0.017192598, 0.060540862, 0.1854037, 0.21287154, 0.04813192, 0.042614728, 0.034764063;
0.0030835248, 0.018511582, 0.0071733585, 0.017076733, 0.064545207, 0.0026390438, 0.088922881, 0.045725599;
0.12798512, 0.23215951, 0.027465452, 0.03174505, 0.04352935, 0.025079668, 0.044403922, 0.035459157]
check: rows:8 cols:8 chans:1 min:-3.26489 max:25.6
[3.24278, 2.8858342, 2.1547849, 7.0311346, 8.2272902, 3.0874779, 8.0729504, 6.2278996;
0.30818239, 0, 2.373708, 7.6136961, 4.8025799, 25.6, 7.6541481, 4.6035733;
0.16511716, 3.2974114, -3.2648909, 0, 2.0480251, 8.1112442, 0.97566891, 2.8674555;
8.9254856, 1.2613275, 0.72119683, 3.0374279, -0.32588482, 0, 1.1993563, 0.63116002;
2.6758013, 1.8996174, 1.7175947, 10.264549, 18.582258, 11.038349, 0.042666838, 0;
1.6403483, 1.1003263, 3.8746152, 11.865837, 13.623778, 3.0804429, 2.7273426, 2.2249;
0.44932228, 0, 0.45909494, 1.0929109, 4.1308932, 0.16889881, 5.6910644, 2.9264383;
8.1910477, 14.858209, -0.071794562, 0, 2.7858784, 1.6050987, 2.841851, 2.2693861]
diff: rows:8 cols:8 chans:1 min:-0.251977 max:17.4945
[0, 0, 0, 0, 0, 0, 0, 0;
1.5223728, 0.59031492, 0, 0, 0, 0, 0, 0;
0, 0, 17.494459, 3.3915801, 0, 0, 0, 0;
0, 0, 0, 0, 1.5637801, 1.9254711, 0, 0;
0, 0, 0, 0, 0, 0, 11.408258, 2.6994755;
0, 0, 0, 0, 0, 0, 0, 0;
-0.2519767, 1.1847413, 0, 0, 0, 0, 0, 0;
0, 0, 1.8295834, 2.0316832, 0, 0, 0, 0]
The difficult part for me is the nonzero entries in the diff matrix. I've accounted for the scaling FFTW does on the values and the padding needed to do an in-place fft on real data; what am I missing?
I find it surprising that the data could be off by a value of 17 (which is 66% of the max value), when there are so many zeros. Also, the data irregularities seem to form a diagonal pattern.
As you may have noticed when writing fftw_alloc_real(y * 2*(x/2 + 1)), FFTW needs extra space in the x direction to store the complex data. In your case, as x=8, it needs 2*(x/2+1)=10 reals per row.
http://www.fftw.org/doc/Real_002ddata-DFT-Array-Format.html#Real_002ddata-DFT-Array-Format
So you should take care of this stride as you populate the input array or retrieve values from it.
You may change
input[yi*x + xi] = (double)yptr[xi];
for
int xfft=2*(x/2 + 1);
...
input[yi*xfft + xi] = (double)yptr[xi];
And
yptr[xi] = (float)input[yi*x + xi];
for
yptr[xi] = (float)input[yi*xfft + xi];
This should solve your problem, since the nonzero points in your diff correspond to the extra padding.
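For clarity, here is a minimal sketch of both loops with the padded row stride applied, keeping the same variable names as in the question and assuming the plans are created before the loops as before:
const int xfft = 2 * (x / 2 + 1);   // padded row length for the in-place r2c transform

// populate fftw data from f, using the padded stride
for (int yi = 0; yi < y; ++yi)
{
    const float* yptr = f.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        input[yi * xfft + xi] = (double)yptr[xi];
}
fftw_execute(plan);
fftw_execute(iplan);

// read back with the same padded stride
for (int yi = 0; yi < y; ++yi)
{
    float* yptr = check.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        yptr[xi] = (float)input[yi * xfft + xi];
}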
Bye,

Using A BMP as heightmap. How to access pixel color to set up heights?

So I'm trying to create some simple terrain using a BMP, and I don't know how to access the pixel color to get its RGB value to use as the height. I understand the concept, just not how to put it into practice. This is the code I have so far. Any help is very much appreciated!
CUSTOMVERTEX vecArray[256][256];
m_pSurface = nullptr;
D3DXIMAGE_INFO imageInfo;
ZeroMemory(&imageInfo, sizeof(D3DXIMAGE_INFO));
HRESULT hr = D3DXGetImageInfoFromFile(L"heightmap.bmp", &imageInfo);
_pDevice->CreateOffscreenPlainSurface(imageInfo.Width, imageInfo.Height, D3DFMT_X8R8G8B8, D3DPOOL_SCRATCH, &m_pSurface, 0);
hr = D3DXLoadSurfaceFromFile(m_pSurface, 0, 0, L"heightmap.bmp", 0, D3DX_FILTER_NONE, 0, &imageInfo);
D3DLOCKED_RECT lockRect;
ZeroMemory(&lockRect, sizeof(D3DLOCKED_RECT));
m_pSurface->LockRect(&lockRect, 0, D3DLOCK_READONLY);
int iNumPixels = imageInfo.Width * imageInfo.Height;
int iPixelsWidth = imageInfo.Width;
int iPixelsHeight = imageInfo.Height;
for (int i = 0; i < iPixelsWidth; ++i)      // HORIZONTAL ROWS
{
    for (int j = 0; j < iPixelsHeight; ++j) // VERTICAL ROWS
    {
        vecArray[i][j].x = (float)i;
        vecArray[i][j].y = (float)j;
        vecArray[i][j].z = ???? // Get Height from bmp
        vecArray[i][j].color = D3DCOLOR_XRGB(255, 255, 255);
    }
}
m_pSurface->UnlockRect();
The struct is defined as:
struct CUSTOMVERTEX
{
    FLOAT x, y, z; // The untransformed, 3D position for the vertex
    DWORD color;
};
Once you have the pixels mapped into a buffer, interpolate the color value, an integer in the range 0 to 255, to your height range. If you are using a 3-channel (RGB) image, you will need to first average the channels to one value (i.e. (r + g + b) / 3); that gives you a single value per pixel.
See "linear interpolation" for converting from the pixel range to the height range. If you're asking how to access the pixel values themselves, that depends on the method used to load the bitmap.

Saving part of screen to file (SOIL and glReadPixels)

I'm trying to save an image of size 5x5 pixels, read with glReadPixels into a file using SOIL.
I read the pixels:
int x = 400;
int y = 300;
std::vector< unsigned char* > rgbdata(4*5*5);
glReadPixels(x, y, 5, 5,GL_RGBA,GL_UNSIGNED_BYTE, &rgbdata[0]);
Then I try saving the read data with SOIL's save image function
int save_result = SOIL_save_image
(
"image_patch.bmp",
SOIL_SAVE_TYPE_BMP,
5, 5, 4,
rgbdata[0]
);
But when trying to save the image, I get an unhandled exception.
Solution (by Christian Rau)
int x = 400;
int y = 300;
std::vector< unsigned char > rgbdata(4*5*5);
glReadPixels(x-(5/2), y-(5/2), 5, 5,GL_RGBA,GL_UNSIGNED_BYTE, &rgbdata[0]);
int save_result = SOIL_save_image
(
"image_patch.bmp",
SOIL_SAVE_TYPE_BMP,
5, 5, 4,
rgbdata.data()
);
You are creating a vector of pointers to unsigned char (std::vector<unsigned char*>), but what you want is just a vector of unsigned char (std::vector<unsigned char>).
And in the call to SOIL_save_image you shouldn't pass rgbdata[0], which would be a single unsigned char (and with your incorrect vector type an uninitialized pointer, likely resulting in some memory access error), but a pointer to the complete data, and thus rgbdata.data() (or &rgbdata[0] if you don't have C++11).
Also note:
OpenGL's default pack/unpack alignment is 4, meaning each row of pixel data is expected to start on a 4-byte boundary; in glReadPixels(x, y, width, height, format, type, data) this effectively means the row size in bytes should be a multiple of 4.
If it isn't (and a width of 5 can easily violate this), it may lead to unexpected results, so to be safe set the alignment to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
glPixelStorei(GL_PACK_ALIGNMENT,1);

glReadPixels store x, y values

I'm trying to store pixel data by using glReadPixels, but so far I managed to only store it one pixel at a time. I'm not sure if this is the way to go. I currently have this:
unsigned char pixels[3];
glReadPixels(50,50, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixels);
What would be a good way to store it in an array, so that I can get the values like this:
pixels[20][50][0]; // x=20 y=50 -> R value
pixels[20][50][1]; // x=20 y=50 -> G value
pixels[20][50][2]; // x=20 y=50 -> B value
I guess I could simply put it in a loop:
unsigned char pixels[width][height][3];
for ( all pixels on Y axis )
{
    for ( all pixels on X axis )
    {
        glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixels[x][y]);
    }
}
But I have the feeling that there must be a much better way to do this. I do, however, need my array to be laid out as described above the code. So would the for-loop idea be good, or is there a better way?
glReadPixels simply returns bytes in the order R, G, B, R, G, B, ... (based on your setting of GL_RGB) from the bottom left of the screen going up to the top right. From the OpenGL documentation:
glReadPixels returns pixel data from the frame buffer, starting with
the pixel whose lower left corner is at location (x, y), into client
memory starting at location data. Several parameters control the
processing of the pixel data before it is placed into client memory.
These parameters are set with three commands: glPixelStore,
glPixelTransfer, and glPixelMap. This reference page describes the
effects on glReadPixels of most, but not all of the parameters
specified by these three commands.
The overhead of calling glReadPixels thousands of times will most likely take a noticeable amount of time (depends on the window size, I wouldn't be surprised if the loop took 1-2 seconds).
It is recommended that you only call glReadPixels once and store it in a byte array of size (width - x) * (height - y) * 3. From there you can either reference a pixel's component location with data[(py * width + px) * 3 + component] where px and py are the pixel locations you want to look up, and component being the R, G, or B components of the pixel.
If you absolutely must have it in a 3-dimensional array, you can write some code to rearrange the 1d array after the glReadPixels call.
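As a minimal sketch of that single-call approach, assuming width and height are the dimensions of the region you want to read and (px, py) is the pixel you want to look up:
std::vector<unsigned char> data(width * height * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1);                  // rows tightly packed, no 4-byte padding
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data.data());

// Components of the pixel at (px, py), counted from the lower-left corner.
unsigned char r = data[(py * width + px) * 3 + 0];
unsigned char g = data[(py * width + px) * 3 + 1];
unsigned char b = data[(py * width + px) * 3 + 2];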
If you define your pixel array like this:
unsigned char pixels[MAX_Y][MAX_X][3];
And then access it like this:
pixels[y][x][0] = r;
pixels[y][x][1] = g;
pixels[y][x][2] = b;
Then you'll be able to read pixels with one glReadPixels call:
glReadPixels(left, top, MAX_X, MAX_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
What you can do is declare a simple one-dimensional array in a struct and use operator overloading for convenient subscript notation:
struct Pixel2d
{
    static const int SIZE = 50;

    unsigned char& operator()( int nCol, int nRow, int RGB )
    {
        return pixels[ ( nCol * SIZE + nRow ) * 3 + RGB ];
    }

    unsigned char pixels[ SIZE * SIZE * 3 ];
};
int main()
{
    Pixel2d p2darray;
    // Read the whole 50x50 region in one call, straight into the struct's buffer.
    glReadPixels(50, 50, Pixel2d::SIZE, Pixel2d::SIZE, GL_RGB, GL_UNSIGNED_BYTE, p2darray.pixels);

    for( int i = 0; i < Pixel2d::SIZE; ++i )
    {
        for( int j = 0; j < Pixel2d::SIZE; ++j )
        {
            unsigned char rpixel = p2darray(i, j, 0);
            unsigned char gpixel = p2darray(i, j, 1);
            unsigned char bpixel = p2darray(i, j, 2);
        }
    }
}
Here you read a 50*50 block of pixels in one shot, and the operator()( int nCol, int nRow, int RGB ) overload provides the needed convenience. For performance reasons you don't want to make too many glReadPixels calls.