segmentation fault when compiling c++ code - c++

I have been struggling with this problem for days. The code compiles successfully, but when I run the binary a segmentation fault occurs. Here is the error:
Program received signal SIGSEGV, Segmentation fault. _int_malloc (av=av@entry=0x7ffff6adfb20 <main_arena>, bytes=bytes@entry=15859713) at malloc.c:3802 malloc.c: No such file or directory.
Env: Ubuntu 16.04, VMware Workstation
Compiler: g++ 5.4.0
Standard: C++11
Lib: imebra 5.0.1
Here is my code:
#include <imebra/imebra.h>
#include <iostream>
#include <fstream>
#include <stdlib.h>
#define img_height 2816
#define img_width 2816
#define img_bit 2
#define img_size img_height*img_width*img_bit //15.1MB
using namespace std;
//MONOCHROME1: indicates that the greyscale ranges from bright to dark with ascending pixel values
//MONOCHROME2: indicates that the greyscale ranges from dark to bright with ascending pixel values
/*
create an Image object
fill the image object with raw data
create a DICOM dataset
add the image to the DICOM dataset
fill all the necessary DICOM tags (e.g. sop class, instance, patient name, etc)
save the DICOM dataset
*/
int main()
{
    //ifstream mydata("/home/lixingyu/GH1.raw",ios::binary);
    //uint32_t *pImgData = (uint32_t *)malloc(img_size*sizeof(uint32_t));
    //mydata.read(pImgData,img_size);
    FILE *fp = NULL;
    fp = fopen("/home/lixingyu/123.raw","rb");
    uint32_t *pImgData = new (std::nothrow) uint32_t (img_size);
    fread(pImgData, sizeof(uint32_t), img_size, fp);
    cout << "success" << endl;
    /*--------- program stops here -------*/
    // Create an image 500 pixels wide, 400 pixels high;
    // each sample is a 16 bit unsigned value, the colorspace
    // is monochrome_2, the highest bit used is 15
    // imebra::MutableImage image(500,400,imebra::bitDepth_t::depthU16,"MONOCHROME_2",15);
    imebra::MutableImage image(img_height, img_width, imebra::bitDepth_t::depthU16, "MONOCHROME2", 15);
    // 1. Fill the image with data.
    // We use a writing data handler to write into the image.
    // The data is committed into the image only when the writing
    // data handler goes out of scope.
    imebra::WritingDataHandlerNumeric writeIntoImage(image.getWritingDataHandler());
    for (size_t y = 0; y != img_width; ++y)
    {
        for (size_t x = 0; x != img_height; ++x)
        {
            writeIntoImage.setUnsignedLong(y*img_height+x, pImgData[y*img_height+x]);
        }
    }
    // specify the transfer syntax and the charset
    imebra::charsetsList_t charsets;
    charsets.push_back("ISO 2022 IR 6");
    // Explicit VR little endian
    imebra::MutableDataSet dataSet("1.2.840.10008.1.2.1", charsets);
    // add the image to the dataSet
    dataSet.setImage(0, image, imebra::imageQuality_t::veryHigh);
    // set the patient name
    dataSet.setUnicodePatientName(imebra::TagId(imebra::tagId_t::PatientName_0010_0010), imebra::UnicodePatientName(L"fjx", L"", L""));
    // save to a file
    imebra::CodecFactory::save(dataSet, "GH1.dcm", imebra::codecType_t::dicom);
    free(pImgData);
}
When I debug with gdb, the fault occurs at the point shown above. I have increased my stack size to 100 MB, but the segmentation fault still occurs.
Maybe something is wrong with the dynamic memory allocation?
Could anyone help me out?
FYI, the imebra::XXX functions all come from the imebra library.

You are not allowed to call free on memory allocated by new. That causes undefined behavior. You must call delete instead.
You are also allocating only one uint32_t (and initializing it with the value img_size), not an array of img_size elements. For that you would need new (std::nothrow) uint32_t[img_size]; instead (and later delete[] instead of delete). So fread is going to write out of bounds.
You also need to check that the return value of new (std::nothrow) is not a null pointer, which happens on allocation failure. If you use the throwing version, you won't need that check.
Please don't use new like this at all, though; use std::vector instead. malloc in C++ is even worse than new.
Similarly, don't use the C I/O library in C++. Use std::ifstream instead.
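To illustrate the suggested fix, here is a minimal sketch of the file-reading part using std::vector and std::ifstream. The helper name readRawSamples is mine, and I am assuming the raw file holds 16-bit samples (to match the depthU16 image the question creates); adjust the element type if the file really contains 32-bit values:

```cpp
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read `count` 16-bit samples from a raw file into a vector,
// which owns the buffer and frees it automatically.
std::vector<std::uint16_t> readRawSamples(const std::string& path, std::size_t count)
{
    std::vector<std::uint16_t> samples(count);
    std::ifstream raw(path, std::ios::binary);
    if (!raw)
        throw std::runtime_error("cannot open " + path);
    const std::streamsize bytes =
        static_cast<std::streamsize>(count * sizeof(std::uint16_t));
    raw.read(reinterpret_cast<char*>(samples.data()), bytes);
    if (raw.gcount() != bytes)
        throw std::runtime_error("short read from " + path);
    return samples;
}
```

The returned vector's data() pointer can then be fed to the imebra writing data handler in place of pImgData, with no manual delete or free needed.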


Need example of jpeglib-turbo that works in VS2013 x64

I'm trying to learn how to use the jpeg-turbo library, and I'm having a devil of a time getting started.
The example.c example in the doc folder, and every single example I find on the web, crashes in VS2013 when I try to read a .jpg file.
They compile fine. But when I run them they crash with an access violation error.
What I really need is a tiny, beginner-friendly working example that is known to run properly in VS2013 x64, including the main() block, plus a note on anything special in the VS project properties that I might need to set that could be causing this crashing.
I'm pulling my hair out just trying to get one simple example working.
Thanks for the help.
Edit: Here is a very small example.
I've also tried to get jpeglib to run with and without using Boost/GIL, but it always crashes when loading the image: exception at 0x00000000774AE4B4 (ntdll.dll).
#include <stdio.h>
#include <assert.h>
#include <jpeglib.h>
#pragma warning(disable: 4996)
int main(int argc, char* argv[])
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    JSAMPARRAY buffer;
    int row_stride;
    // initialize error handling
    cinfo.err = jpeg_std_error(&jerr);
    FILE* infile;
    infile = fopen("source.jpg", "rb");
    assert(infile != NULL);
    // initialize the decompression
    jpeg_create_decompress(&cinfo);
    // specify the input
    jpeg_stdio_src(&cinfo, infile);
    // read headers
    (void)jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo); // <---- This guy seems to be the culprit
    printf("width: %d, height: %d\n", cinfo.output_width, cinfo.output_height);
    row_stride = cinfo.output_width * cinfo.output_components;
    buffer = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr)&cinfo, JPOOL_IMAGE, row_stride, 1);
    JSAMPLE firstRed, firstGreen, firstBlue; // first pixel of each row, recycled
    while (cinfo.output_scanline < cinfo.output_height)
    {
        (void)jpeg_read_scanlines(&cinfo, buffer, 1);
        firstRed = buffer[0][0];
        firstGreen = buffer[0][1];
        firstBlue = buffer[0][2];
        printf("R: %d, G: %d, B: %d\n", firstRed, firstGreen, firstBlue);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);
    return 0;
}
I found the problem.
In my VS project's Linker -> Input -> Additional Dependencies, I changed it to use turbojpeg-static.lib (or jpeg-static.lib when I'm using the non-turbo libraries). The turbojpeg.lib or jpeg.lib crashes for some reason when reading the image. FYI, I am using the libjpeg-turbo-1.4.2-vc64.exe build with VS2013, and this is how I got it to work.
One more very important thing that I learned that I'd like to share:
When writing to a new .jpg image, if the new image size is different from the source image, it will typically crash, especially if the new size is larger than the source. I'm guessing this happens because it takes much longer to re-sample the color data to a different size, so this type of action might require its own thread to prevent crashing. I wasted a lot of time chasing code errors and compiler settings because of this one, so watch out for it.

Writing MAT files: Access violation writing location after 508 successful calls

I'm running a 64-bit C++ program in VS2012 that processes images and writes the results to a MAT file. For whatever reason, after 508 working iterations, I get:
"Unhandled exception at ____ (libmat.dll) in Program.exe:____. Access violation writing location ____." (Underscores represent address locations)
However, if I restart the program on image number 509 (changing nothing else; just a restart), it works just fine for the next 508 images and then hands me the same error again.
A comment on an earlier, less-detailed post said it may be some memory issue. Perhaps I'm not handling garbage collection properly? I can't figure it out though.
For the record, all of the data being saved to files ends up in a 127x47 (row x col) double matrix. That means each of the 508 successful files contained 5969 doubles (plus whatever metadata goes into a MAT file). Perhaps some memory limit gets reached because I don't clear it properly?
The code in question is below:
void writeMat(void* data, int rows, int cols, std::string fname)
{
    // Copies data to MATLAB format matrix
    mxArray* mat;
    mat = mxCreateDoubleMatrix(rows, cols, mxREAL);
    memcpy((void*)mxGetPr(mat), data, rows * cols * sizeof(double));
    // Creates output file
    MATFile* output;
    std::string matFilename = fname + ".mat"; // Output filename
    std::string varName = "tmp";              // Storage variable in MAT file
    output = matOpen(matFilename.c_str(), "w"); // Opens MAT file for writing
    if (output == NULL) {
        printf("Error creating file");
    }
    // Adds data variable to MAT file
    int status = matPutVariable(output, varName.c_str(), mat);
    if (status != 0)
    {
        printf("Error writing mat file");
    }
    mxDestroyArray(mat); // Free up memory
}
Any help would be appreciated. Thanks in advance!
It appears that you are running out of file handles, because you keep calling matOpen but then don't subsequently call matClose. Most systems impose an upper limit on the number of concurrently open files - it would appear that on your system this limit is 512 - there are already a few files open, so when you get to around the 508th iteration you run out of file handles.
Having said that, you should not see a crash - you have error checking on matOpen and this should fail gracefully when you try to open too many files, but evidently it doesn't!
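A minimal sketch of the fix, closing the file once the variable has been written (matClose, like matOpen and matPutVariable, is part of the MATLAB MAT-file API; error handling kept in the style of the original):

```cpp
// ... tail of writeMat, after matPutVariable ...
int status = matPutVariable(output, varName.c_str(), mat);
if (status != 0) {
    printf("Error writing mat file");
}
mxDestroyArray(mat);              // Free the mxArray
if (matClose(output) != 0) {      // Release the file handle; without this,
    printf("Error closing file"); // handles accumulate until the per-process limit
}
```

With the handle released on every call, the iteration count is no longer bounded by the open-file limit.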

8bpp BMP - referring pixels to the color table; want to read only one row of pixels; C++

I have a problem with reading an 8-bit grayscale BMP. I am able to get info from the header and to read the palette, but I can't map pixel values to the palette entries. I have found how to read the pixel data, but not how to use it in the case of a BMP with a palette. I am a beginner. My goal is to read only one row of pixels at a time.
Code:
#include <iostream>
#include <fstream>
using namespace std;
int main(int arc, char** argv)
{
    const char* filename = "Row_tst.bmp";
    remove("test.txt");
    ofstream out("test.txt", ios_base::app); // file for monitoring the results
    FILE* f = fopen(filename, "rb");
    unsigned char info[54];
    fread(info, sizeof(unsigned char), 54, f); // read the header
    int width = *(int*)&info[18];
    int height = *(int*)&info[22];
    unsigned char palette[1024]; // read the palette
    fread(palette, sizeof(unsigned char), 1024, f);
    for (int i = 0; i < 1024; i++)
    {
        out << "\n";
        out << (int)palette[i];
    }
    int paletteSmall[256]; // 1024-byte palette won't be needed in the future
    for (int i = 0; i < 256; i++)
    {
        paletteSmall[i] = (int)palette[4*i];
        out << paletteSmall[i] << "\n";
    }
    int size = width;
    //for(int j=0;j<height;j++)
    {
        unsigned char* data = new unsigned char[size];
        fread(data, sizeof(unsigned char), size, f);
        for (int i = 0; i < width; i++)
        {
            cout << "\n" << i << "\t" << paletteSmall[*(int*)&data[i]];
        }
        delete [] data;
    }
    fclose(f);
    return 0;
}
What I get in test.txt seems fine - first values from 0 0 0 0 to 255 255 255 0 (palette), then values from 0 to 255 (paletteSmall).
The problem is that I can't map pixel values to the color table entries. My application crashes, with symptoms indicating that it probably tried to access a nonexistent element of an array. If I understand properly, a pixel in a BMP with a color table should contain the index of a color table entry, so I have no idea why it doesn't work. I ask for your help.
You are forcing your 8-bit values to be read as int:
cout<<"\n"<<i<<"\t"<<paletteSmall[*(int*)&data[i]];
The amount of casting indicates you were having problems here and probably resorted to adding one cast after another until "it compiled". As it turns out, compiling without errors is not the same as working without errors.
What happens here is that you force the data pointer to read 4 bytes (or as much as your local int size is, anyway) and so the value will almost always exceed the size of paletteSmall. (In addition, the last couple of values will be invalid under all circumstances, because you read bytes from beyond the valid range of data.)
Because the image data itself is 8-bit, all you need here is
cout<<"\n"<<i<<"\t"<<paletteSmall[data[i]];
No casts necessary; data is an unsigned char * so its values are limited from 0 to 255, and paletteSmall is exactly the correct size.
On Casting
The issue with casting is that your compiler will complain if you tell it flat out to treat a certain type of value as if it is another type altogether. By using a cast, you are telling it "Trust me. I know what I am doing."
This can lead to several problems if you actually do not know :)
For example: a line such as your own
int width = *(int*)&info[18];
appears to work because it returns the proper information, but that is in fact a happy accident.
The array info contains several disconnected unsigned char values, and you tell your compiler that there is an int stored starting at position #18 – it trusts you and reads an integer. It assumes that (1) the number of bytes that you want to combine into an integer is in fact the number of bytes that itself uses for an int (sizeof(int)), and (2) the individual bytes are in the same order as it uses internally (Endianness).
If either of these assumptions is false, you can get surprising results; and almost certainly not what you wanted.
The proper procedure is to scan the BMP file format for how the value for width is stored, and then using that information to get the data you want. In this case, width is "stored in little-endian format" and at offset 18 as 4 bytes. With that, you can use this instead:
int width = info[18]+(info[19]<<8)+(info[20]<<16)+(info[21]<<24);
No assumptions on how large an int is (except that it needs to be at least 4 bytes), and no assumption on the byte order (shifting values 'internally' does not depend on endianness).
So why did it work anyway (at least, on your computer)? The most common size for an int in this decade is 4 bytes. The most popular CPU type happens to store multi-byte values in the same order as they are stored inside a BMP. Add that together, and your code works, on most computers, in this decade. A happy accident.
The above may not be true if you want to compile your code on another type of computer (such as an embedded ARM system that uses another endianness), when the compiler used has a smaller int (which by now would mean a very old compiler) or a larger one (just wait another 10 years or so), or if you want to adjust your code to read other types of files (which will have parameters of their own, and the endianness used is one of them).
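The shift-and-add idea above can be wrapped in a small helper; the name readLE32 is mine, and it works for any 4-byte little-endian field in the BMP header (width at offset 18, height at offset 22):

```cpp
#include <cstdint>

// Assemble a little-endian 32-bit value from 4 bytes,
// independent of the host CPU's endianness and int size.
std::uint32_t readLE32(const unsigned char* p)
{
    return  static_cast<std::uint32_t>(p[0])
         | (static_cast<std::uint32_t>(p[1]) << 8)
         | (static_cast<std::uint32_t>(p[2]) << 16)
         | (static_cast<std::uint32_t>(p[3]) << 24);
}
```

In the question's code this would be used as int width = (int)readLE32(&info[18]); with no pointer cast at all.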

Run-time check Failure #2 , while using API for saving image frame

MSVS 2010 , Windows 7
I am using an API to access camera features.
The following function displays a frame and saves it.
void DisplayThread::OnBufferDisplay( PvBuffer *aBuffer )
{
    mDisplayWnd->Display( *aBuffer ); // displaying frame
    // Now let us try to save the frame with name of the form %Y%m%d%H%M%S.bmp
    system("mkdir D:\\ABCD");
    struct tm *tm;
    int count;
    time_t t;
    char str_time[20];
    t = time(NULL);
    tm = localtime(&t);
    strftime(str_time, sizeof(str_time), "%Y%m%d%H%M%S.bmp", tm); // name of the frame
    char name[1000]; // sufficient space
    sprintf(name, "%s", str_time);
    char path[] = "D:\\ABCD";
    strcat(path, name); // path = path + "\\" + name;
    // char* str = (char*)(void*)Marshal::StringToHGlobalAnsi(path);
    PvString lFilename( path );
    PvString lCompleteFileName( lFilename );
    PvBufferWriter lBufferWriter; // The following function saves the image
    PvResult lResult = lBufferWriter.Store( aBuffer, lCompleteFileName, PvBufferFormatBMP );
}
The name of the BMP file that is saved is of the form %Y%m%d%H%M%S.bmp.
The program builds perfectly fine, and even the display is coming up correctly, but the following error message pops up.
It looks like something is wrong with the memory allocation for the variable 'name'. But I have allocated sufficient space, and even then I am getting this error.
Why is it happening?
Kindly let me know if more info is required to debug this.
Note: The value returned by lBufferWriter.Store() is 'OK' (indicating that buffer/frame writing was successful), but no file is getting saved. I guess this is because of the run-time check failure I am getting.
Please help.
Your path[] array size is 8, which is too small to hold the string after concatenation.
As this path variable is on the stack, the overflow is corrupting your stack.
Your buffer should be large enough to hold the data that you want to put into it.
In your case, just change the line to:
char path[1024]="D:\\ABCD";
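An alternative sketch that avoids fixed-size buffers altogether by building the path with std::string (the helper name makeOutputPath is mine; note it also inserts the backslash that the original strcat call omits):

```cpp
#include <string>

// Concatenate directory and file name; std::string grows as needed,
// so there is no fixed buffer to overflow.
std::string makeOutputPath(const std::string& dir, const std::string& name)
{
    return dir + "\\" + name;
}
```

The result can then be handed to the SDK, e.g. PvString lFilename(makeOutputPath("D:\\ABCD", name).c_str()); assuming PvString copies the characters on construction.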

Reading in raw encoded nrrd data file into double

Does anyone know how to read in a file with raw encoding? I'm so stumped... I am trying to read in floats or doubles (I think). I have been stuck on this for a few weeks. Thank you!
File that I am trying to read from:
http://www.sci.utah.edu/~gk/DTI-data/gk2/gk2-rcc-mask.raw
Description of raw encoding:
http://teem.sourceforge.net/nrrd/format.html#encoding
- "raw" - The data appears on disk exactly the same as in memory, in terms of byte values and byte ordering. Produced by write() and fwrite(), suitable for read() or fread().
Info of file:
http://www.sci.utah.edu/~gk/DTI-data/gk2/gk2-rcc-mask.nhdr - I think the only things that matter here are the big-endian byte order (still trying to understand what that means) and the raw encoding.
My current approach, uncertain if it's correct:
// Function ripped off from the example on the C++ ifstream::read reference page
void scantensor(string filename)
{
    ifstream tdata(filename, ifstream::binary); // not sure if I should put ifstream::binary here
    // other things I tried:
    // ifstream tdata(filename);
    // ifstream tdata(filename, ios::in);
    if (tdata) {
        tdata.seekg(0, tdata.end);
        int length = tdata.tellg();
        tdata.seekg(0, tdata.beg);
        char* buffer = new char[length];
        tdata.read(buffer, length);
        tdata.close();
        double* d;
        d = (double*) buffer;
    } else cerr << "failed" << endl;
}
/* P.S. I attempted to print the first 100 elements of the array,
then 100 other elements at some arbitrary array indices (i.e. 9,900 - 10,000). I kept increasing the number of 0's until I ran out of bounds at 100,000,000 (I don't think that's how it works, lol, but I was just playing around to see what happens).
Here's the part that makes me suspicious: ifstream has the different constructors I tried above, and
the first 100 values are always the same;
if I use ifstream::binary, then I get some values for the 100 arbitrary printings;
if I use the other two options, then I get -6.27744e+066 for all 100 of them.
So for now I am going to assume that ifstream::binary is the correct one. The thing is, I am not sure if the file I provided is how binary files actually look. I am also unsure if these are the actual numbers that I am supposed to read in, or just casting gone wrong. I do realize that my casting from char* to double* can be unsafe; I got that from one of the threads.
*/
I really appreciate it!
Edit 1: Right now the data being read in using the above method is apparently "incorrect" since in paraview the values are:
Dxx,Dxy,Dxz,Dyy,Dyz,Dzz
[0, 1], [-15.4006, 13.2248], [-5.32436, 5.39517], [-5.32915, 5.96026], [-17.87, 19.0954], [-6.02961, 5.24771], [-13.9861, 14.0524]
It's a 3 x 3 symmetric matrix, so 7 distinct values, 7 ranges of values.
The values that I am currently parsing from the file have implausible magnitudes (i.e. -4.68855e-229, -1.32351e+120).
Perhaps somebody knows how to extract the floats from Paraview?
Since you want to work with doubles, I recommend reading the data from the file as a buffer of doubles:
const long machineMemory = 0x40000000; // 1 GB
FILE* file = fopen("c:\\data.bin", "rb");
if (file)
{
    int size = machineMemory / sizeof(double);
    if (size > 0)
    {
        double* data = new double[size];
        int read(0);
        while ((read = fread(data, sizeof(double), size, file)) != 0)
        {
            // Process data here (read = number of doubles)
        }
        delete [] data;
    }
    fclose(file);
}
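One caveat: the .nhdr in the question declares big-endian data, so on a little-endian host (e.g. x86) each double read this way must have its byte order reversed before use, which would also explain the implausible magnitudes reported in the edit. A minimal sketch of the swap (the helper name swapDoubleBytes is mine):

```cpp
#include <cstdint>
#include <cstring>

// Reverse the byte order of a double, e.g. to convert a value
// read from a big-endian file for use on a little-endian host.
double swapDoubleBytes(double value)
{
    std::uint64_t bits;
    std::memcpy(&bits, &value, sizeof(bits)); // reinterpret without UB
    std::uint64_t swapped = 0;
    for (int i = 0; i < 8; ++i)
        swapped = (swapped << 8) | ((bits >> (8 * i)) & 0xFF);
    std::memcpy(&value, &swapped, sizeof(value));
    return value;
}
```

Applying the swap to each element of data inside the read loop (on a little-endian machine) should yield values in the ranges ParaView reports; swapping twice returns the original value, which makes for an easy sanity check.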