Sometimes I get EXC_BAD_ACCESS (access violation) when reversing an array - C++

I am loading an image using the OpenEXR library.
This works fine, except the image is loaded rotated 180 degrees. I use the loop shown below to reverse the array, but sometimes the program quits and Xcode gives me an EXC_BAD_ACCESS error (which I assume is the same as an access violation in MSVC). It does not happen every time, just once every 5-10 times.
Ideally I'd want to reverse the array in place, although that led to errors every time, and using memcpy would fail silently, producing just a blank image. I'd like to know what's causing this problem first.
Here is the code I am using (Rgba is a struct of four "half"s, r, g, b, and a, defined in OpenEXR):
Rgba* readRgba(const char filename[], int& width, int& height){
    Rgba* pixelBuffer = new Rgba[width * height];
    Rgba* temp = new Rgba[width * height];
    // ....EXR Loading code....
    // TODO: *Sometimes* the following code results in a bad memory access error. No idea why.
    // Flip the image to conform with OpenGL coordinates.
    for (int i = 0; i < height; i++){
        for (int j = 0; j < width; j++){
            temp[(i*width)+j] = pixelBuffer[(width*height)-(i*width)+j];
        }
    }
    delete pixelBuffer;
    return temp;
}
Thanks in advance!

Change:
temp[(i*width)+j] = pixelBuffer[(width*height)-(i*width)+j];
to:
temp[(i*width)+j] = pixelBuffer[(width*height)-((i*width)+j) - 1];
(Hint: think about what happens when i = 0 and j = 0: the original reads pixelBuffer[width*height], one element past the end of the array!)

And here's how you can optimize this code to save memory and CPU cycles:
Rgba* readRgba(const char filename[], int& width, int& height)
{
    Rgba* pixelBuffer = new Rgba[width * height];
    Rgba tempPixel;
    // ....EXR Loading code....
    // Flip the image to conform with OpenGL coordinates.
    // Swap each pixel with its mirror; the strict < stops the loop before
    // the midpoint so the middle pair is not swapped twice.
    for (int i = 0; i <= height/2; i++)
        for (int j = 0; j < width && (i*width + j) < (height*width/2); j++)
        {
            tempPixel = pixelBuffer[i*width + j];
            pixelBuffer[i*width + j] = pixelBuffer[height*width - (i*width + j) - 1];
            pixelBuffer[height*width - (i*width + j) - 1] = tempPixel;
        }
    return pixelBuffer;
}
Note that optimal (from a memory-usage best-practices point of view) would be to pass pixelBuffer as a parameter, already allocated by the caller. It's good practice to allocate and release memory in the same piece of code.
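A minimal sketch of that caller-allocates variant (the signature is hypothetical, and it glosses over the fact that the EXR header is what determines width and height):

void readRgba(const char filename[], int& width, int& height, Rgba* pixelBuffer)
{
    // ....EXR Loading code fills pixelBuffer....
    // ....in-place flip as in the loop above....
}

// Caller side: allocation and release live in the same scope.
Rgba* buffer = new Rgba[width * height];
readRgba("image.exr", width, height, buffer);
// ...use buffer...
delete[] buffer;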

Related

SDL surfaces and BMPs

I have been attempting to work with SDL and OpenGL for a project I'm working on, and to enable easy testing, I would like to be able to draw in 2D to the screen. The only way I have found to do this is using SDL surfaces to create and draw BMP images. This is fine, as being able to save the image will be a nice feature later on, but if there is a better way to do this with OpenGL or some other method, please say :).
This is the code I am currently using:
int w = 255;
int h = 255;
SDL_Surface* surface = SDL_CreateRGBSurface(0, w, h, 32, 0, 0, 0, 0);
SDL_LockSurface(surface);
int bpp = surface->format->BitsPerPixel;
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        Uint32 *p = (Uint32 *)surface->pixels + (i * surface->pitch) + (j * bpp);
        *p = SDL_MapRGB(surface->format, i, j, i);
    }
}
SDL_UnlockSurface(surface);
SDL_SaveBMP(surface, "Test.bmp");
This is just a basic test to help me come to terms with how to do this. I'm sure I have some issues with memory handling here, but I'm not sure when, if at all, to delete *p. The issue I am having the biggest problem with, though, is where I use SDL_MapRGB: the program crashes on that line with a SIGSEGV segmentation fault, and I can't figure out what I am doing wrong.
You do not free the memory pointed to by p yourself; it belongs to the surface. But after use, you have to free the whole surface:
SDL_FreeSurface(surface);
Also, bpp is in bits. You have to divide it by 8 to get it in bytes.
And, since pitch and bpp are byte counts, you have to do the pointer arithmetic in bytes:
Uint32 *p = (Uint32 *)((Uint8 *)surface->pixels + (i * surface->pitch) + (j * bpp));
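Putting those fixes together, the loop might look like this (a sketch based on the question's code, not a tested program):

int bpp = surface->format->BitsPerPixel / 8; // bytes per pixel, not bits
SDL_LockSurface(surface);
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        // pitch is in bytes, so step through the pixel data as bytes
        Uint32 *p = (Uint32 *)((Uint8 *)surface->pixels + (i * surface->pitch) + (j * bpp));
        *p = SDL_MapRGB(surface->format, i, j, i);
    }
}
SDL_UnlockSurface(surface);
SDL_SaveBMP(surface, "Test.bmp");
SDL_FreeSurface(surface); // release the surface when done with it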

C++ - Heap Corruption on UInt32*

I am currently programming a game on C++ and am working with the SDL 2.0 library.
I am attempting to dissect a 32x32 image from a texture to store as a tile, recreating it from the pixels of the texture. When I run this code and edit the Uint32* in a for loop, I can edit it, but once I try to create the image, I get a heap corruption.
I currently have this code running:
Uint32* pixels = (Uint32*)m_pSprite->GetPixels();
int pixelCount = (m_pSprite->GetPitch() / 4) * m_pSprite->GetHeight();
int tileOffset = 0;
int spriteSheetOffset = 0;
int widthOffset = m_pSprite->GetWidth();
Uint32* tilePixels = new Uint32(32);
for (int y = 0; y < 32; y++)
{
    tileOffset = (y * 32);
    spriteSheetOffset = (y * widthOffset);
    for (int x = 0; x < 32; x++)
    {
        tilePixels[tileOffset + x] = pixels[spriteSheetOffset + x];
    }
}
int tilePitch = 32 * 4;
SDL_Texture* texture = SDL_CreateTexture(backBuffer.GetRenderer(), SDL_PIXELFORMAT_RGB888, SDL_TEXTUREACCESS_TARGET, TILE_WIDTH, TILE_HEIGHT);
I can see that there is something wrong with the Uint32* variable, and that this is obviously not best practice, but I am still wrapping my head around what can and cannot be done, and what the best way to do things is.
Does anyone have an explanation of what could be happening?
Uint32* tilePixels = new Uint32(32);
This dynamically allocates a single Uint32 and initializes/constructs it to the value 32. It seems you want a 32*32 array of those. Try this:
Uint32* tilePixels = new Uint32[32*32]; // brackets allocate an array
Although, since the size of your array is known at compile time, it would be better to use a stack-allocated array instead of a dynamic one:
Uint32 tilePixels[32*32];
See if that fixes it.
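For reference, the question's copy loop with the array fix applied might look like this (a sketch; pixels and widthOffset are as in the question):

Uint32 tilePixels[32 * 32]; // stack-allocated, nothing to delete
for (int y = 0; y < 32; y++)
{
    int tileOffset = y * 32;
    int spriteSheetOffset = y * widthOffset;
    for (int x = 0; x < 32; x++)
    {
        tilePixels[tileOffset + x] = pixels[spriteSheetOffset + x];
    }
}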

Why do I get a seg fault when I try to input a value in OpenCV?

So I have this piece of code:
if (channels == 3)
    type = CV_32FC3;
else
    type = CV_32FC1;
cv::Mat M(rows, cols, type);
std::cout << "Cols:" << cols << " ColsMat:" << M.cols << std::endl;
float* source_data = (float*) M.data;
// copying the data into the corresponding pixel
for (int r = 0; r < rows; r++)
{
    float* source_row = source_data + (r * rows * channels);
    for (int c = 0; c < cols; c++)
    {
        float* source_pixel = source_row + (c * channels);
        for (int ch = 0; ch < channels; ch++)
        {
            std::cout << "Row:" << r << " Col:" << c << " Channel:" << ch << std::endl;
            std::cout << "Type check: " << typeid(T_M(0,r,c,ch)).name() << std::endl;
            float* source_value = source_pixel + ch;
            *source_value = T_M(0, r, c, ch);
        }
    }
}
T_M is an Eigen::Tensor.
I first thought the error came from T_M, but that isn't the case.
I tried accessing *source_value, and I am fairly sure that is the source of the error.
The funny thing is that I don't get the error at the beginning or the end; I get the seg fault somewhere in the middle.
For example, with rows: 915, cols: 793, and channels: 1,
I get the error at Row:829 Col:729 Channel:0.
I can't figure out the source of this error.
You compute your row pointer wrong; it should be cols instead of rows:
float* source_row = source_data + (r * cols * channels);
In general, you must be very careful when you use a flat representation of a matrix; it's really error-prone.
The answer from Jean-François Fabre will work if the matrix is continuous. If you can't be sure about that (e.g. if the matrix is provided by someone else, if you use submatrices, etc.), you should use the step feature to compute the row pointer:
float* source_row = (float*)(M.data + r * M.step);
This automatically accounts for the number of channels, padding, etc.
Even simpler is to use the row-pointer function directly:
float* source_row = (float*)(M.ptr(r));
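Putting it together, the copy loop from the question might look like this (a sketch; T_M, rows, cols, and channels are as in the question):

for (int r = 0; r < rows; r++)
{
    // ptr() accounts for step/padding, so this is safe for non-continuous Mats
    float* source_row = M.ptr<float>(r);
    for (int c = 0; c < cols; c++)
    {
        float* source_pixel = source_row + (c * channels);
        for (int ch = 0; ch < channels; ch++)
        {
            source_pixel[ch] = T_M(0, r, c, ch);
        }
    }
}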

Why am I running out of heap memory?

So I'm writing a raytracer in C++ using the JetBrains CLion IDE. When I try to create a 600 * 600 image with multisample antialiasing enabled, I run out of memory. I get this error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
This application has requested the Runtime to terminate it in an
unusual way. Please contact the application's support team for more
information.
Code of my render function:
width: 600
height: 600
numberOfSamples: 80
void Camera::render(const int width, const int height){
    int resolution = width * height;
    double scale = tan(Algebra::deg2rad(fov * 0.5)); // deg to rad
    ColorRGB *pixels = new ColorRGB[resolution];
    long loopCounter = 0;
    Vector3D camRayOrigin = getCameraPosition();
    for (int i = 0; i < width; ++i) {
        for (int j = 0; j < height; ++j) {
            double zCamDir = (height/2) / scale;
            ColorRGB finalColor = ColorRGB(0,0,0,0);
            int tempCount = 0;
            for (int k = 0; k < numberOfSamples; k++) {
                tempCount++;
                // If it is single sampled, cast the ray in the middle of the pixel;
                // otherwise, offset the ray by a random value between 0 and 1
                double randomNumber = Algebra::getRandomBetweenZeroAndOne();
                double xCamDir = (i - (width / 2)) + (numberOfSamples == 1 ? 0.5 : randomNumber);
                double yCamDir = ((height / 2) - j) + (numberOfSamples == 1 ? 0.5 : randomNumber);
                Vector3D camRayDirection = convertCameraToWorldCoordinates(Vector3D(xCamDir, yCamDir, zCamDir)).unitVector();
                Ray r(camRayOrigin, camRayDirection);
                finalColor = finalColor + getColorFromRay(r);
            }
            pixels[loopCounter] = finalColor / numberOfSamples;
            loopCounter++;
        }
    }
    CreateImage::createRasterImage(height, width, "RenderedImage.bmp", pixels);
    delete pixels; // Release memory
}
I'm a beginner in C++, so I'd really appreciate your help. I also tried doing the same thing in C# in Microsoft Visual Studio, and the memory usage never exceeded 200MB. I feel like I'm making a mistake somewhere. I can provide more details if you want to help me.
Memory allocated using new [] must be deallocated using delete [].
Your program has undefined behavior because it uses
delete pixels; //Release memory
to deallocate the memory. It needs to be:
delete [] pixels;
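Alternatively, a std::vector avoids the new[]/delete[] pairing altogether. A minimal sketch, assuming createRasterImage accepts a raw ColorRGB pointer as in the question:

#include <vector>

std::vector<ColorRGB> pixels(resolution); // storage is managed automatically
// ... fill pixels[loopCounter] exactly as before ...
CreateImage::createRasterImage(height, width, "RenderedImage.bmp", pixels.data());
// no delete needed: the vector releases its memory when it goes out of scope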

munmap_chunk() - Invalid pointer error

I'm writing a renderer using low-level SDL functions to learn how it all works. I am now trying to do polygon drawing, but I run into errors, possibly due to my inexperience with C++. When running the code I get a munmap_chunk() - invalid pointer error. Searching reveals that it is most likely caused by free()-ing the same memory twice. The error happens when returning from the function. I realize that the error comes from automatically freeing memory that has already been automatically freed, but I'm not experienced enough with C++ to spot the error. Any clues?
My code:
void DrawPolygon (const vector<vec3> & verts, vec3 color){
    // 0. Project to the screen
    vector<ivec2> vertices(verts.size());
    for (int i = 0; i < verts.size(); i++){
        VertexShader(verts.at(i), vertices.at(i));
    }
    // 1. Find max and min y-value of the polygon
    // and compute the number of rows it occupies.
    int miny = vertices[0].y;
    int maxy = vertices[0].y;
    for (int i = 1; i < 3; i++){
        if (vertices[i].y < miny){
            miny = vertices[i].y;
        }
        if (vertices[i].y > maxy){
            maxy = vertices[i].y;
        }
    }
    int rows = abs(maxy - miny) + 1;
    // 2. Resize leftPixels and rightPixels
    // so that they have an element for each row.
    vector<ivec2> leftPixels(rows);
    vector<ivec2> rightPixels(rows);
    // 3. Initialize the x-coordinates in leftPixels
    // to some really large value and the x-coordinates
    // in rightPixels to some really small value.
    for (int i = 0; i < rows; i++){
        leftPixels[i].x = std::numeric_limits<int>::max();
        rightPixels[i].x = std::numeric_limits<int>::min();
        leftPixels[i].y = miny + i;
        rightPixels[i].y = miny + i;
    }
    // 4. Loop through all edges of the polygon and use
    // linear interpolation to find the x-coordinate for
    // each row it occupies. Update the corresponding
    // values in rightPixels and leftPixels.
    for (int i = 0; i < 3; i++){
        ivec2 a = vertices[i];
        ivec2 b = vertices[(i+1)%3];
        // find the number of pixels to draw
        ivec2 delta = glm::abs(a - b);
        int pixels = glm::max(delta.x, delta.y) + 1;
        // interpolate to find the pixels
        vector<ivec2> line (pixels);
        Interpolate(a, b, line);
        for (int j = 0; j < pixels; j++){
            ivec2 p = line[j];
            ivec2 cmpl = leftPixels[p.y - miny];
            ivec2 cmpr = rightPixels[p.y - miny];
            if (p.x < cmpl.x){
                leftPixels[p.y - miny].x = p.x;
                //leftPixels[p.y - miny] = cmpl;
            }
            if (p.x > cmpr.x){
                rightPixels[p.y - miny].x = p.x;
                //cmpr.x = p.x;
                //rightPixels[p.y - miny] = cmpr;
            }
        }
    }
    for (int i = 0; i < leftPixels.size(); i++){
        ivec2 l = leftPixels.at(i);
        ivec2 r = rightPixels.at(i);
        // y coord the same, iterate over x
        int y = l.y;
        for (int x = l.x; x <= r.x; x++){
            PutPixelSDL(screen, x, y, color);
        }
    }
}
Using valgrind gives me this output (this is the first error it reports). Weirdly, the program recovers and keeps running with the expected result, apparently not getting the same error again:
==5706== Invalid write of size 4
==5706== at 0x40AD61: DrawPolygon(std::vector<glm::detail::tvec3<float>, std::allocator<glm::detail::tvec3<float> > > const&, glm::detail::tvec3<float>) (in /home/actimia/prog/dgi14/lab3/ThirdLab)
==5706== by 0x409C78: Draw() (in /home/actimia/prog/dgi14/lab3/ThirdLab)
==5706== by 0x409668: main (in /home/actimia/prog/dgi14/lab3/ThirdLab)
I think my previous post on a similar topic would be useful:
https://stackoverflow.com/a/22658693/2724703
From your Valgrind report, it looks like your program is corrupting memory due to an out-of-bounds write. This does not seem like a "double free" error (it is an overflow scenario). You have mentioned that sometimes Valgrind does not report any error, which makes the problem more difficult. However, there is certainly memory corruption, and you must fix it. Memory errors can occur intermittently for various reasons (different input parameters, multi-threading, changes in execution order).
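One way to confirm and contain the overflow is to bounds-check the row index before writing into leftPixels and rightPixels. A sketch of the inner loop from step 4 with that guard (an interpolated p.y outside [miny, maxy] is exactly what would write past the end of the vectors):

for (int j = 0; j < pixels; j++){
    ivec2 p = line[j];
    int row = p.y - miny;
    if (row < 0 || row >= rows){
        continue; // out-of-range row: writing here corrupts the heap
    }
    if (p.x < leftPixels[row].x){
        leftPixels[row].x = p.x;
    }
    if (p.x > rightPixels[row].x){
        rightPixels[row].x = p.x;
    }
}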