4D Matrix from Matlab to OpenCV - c++

I have to export matrices from Matlab to OpenCV. I use the YAML format, read the file in OpenCV with a cv::FileStorage (modelFile), and store the data in cv::Mat variables. For normal 2D matrices it works fine. But for one of my big 4D matrices I get errors that the string is too long. The matrix has size 16×16×70409×8.
Does someone know a good way to store it? Maybe it is only a format problem.
The code is:
for i = 1:matrixSize(1)
    for j = 1:matrixSize(2)
        fprintf( file, ' - [');
        for a = 1:matrixSize(3)
            for b = 1:matrixSize(4)
                fprintf( file, '%.6f', A(i,j,a,b));
                if(a ~= matrixSize(3))
                    fprintf( file, ',');
                end
            end
        end
        fprintf( file, ']\n');
    end
end
Thanks

My solution is to save the model in binary format instead of YAML, and then read it on the C++ side with the normal fread functions. Of course, you have to know the size of each matrix.
fileID = fopen(BinModel,'w');
fwrite(fileID,[size(model.nSegs),0,0],'uint32'); % size of the matrix
fwrite(fileID,model.nSegs,'uint8'); % matrix data
The file shrinks from 1.4 GB to 200 MB.
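On the C++ side, a minimal read-back sketch (not tested against your data) could look like the following. It assumes model.nSegs is 2-D, so the header written above is four uint32 values (rows, cols, 0, 0); the function name readBinModel is just a placeholder:

#include <cstdio>
#include <cstdint>
#include <opencv2/core/core.hpp>

// Sketch: read the binary file written by the MATLAB code above.
// Assumed layout: 4 x uint32 header (rows, cols, 0, 0), then uint8 data.
cv::Mat readBinModel(const char* path)
{
    FILE* f = fopen(path, "rb");
    if (!f) return cv::Mat();

    uint32_t hdr[4] = {0, 0, 0, 0};
    fread(hdr, sizeof(uint32_t), 4, f);          // rows, cols, 0, 0

    // MATLAB writes column-major, OpenCV is row-major: read into a
    // cols x rows Mat and transpose at the end.
    cv::Mat m(hdr[1], hdr[0], CV_8UC1);
    fread(m.data, sizeof(uint8_t), (size_t)hdr[0] * hdr[1], f);
    fclose(f);

    return m.t();
}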
Regards

Related

Is there a way to convert an image in the FLIR proprietary .seq format into .png?

I'm trying to convert a frame in .SEQ format into .PNG.
I used the Pleora eBUS SDK to automatically find and connect to my GigE FLIR camera.
Then I used:
PvBuffer* data = (PvBuffer*)myFLIR.GetFrameData();
void* data1 = myFLIR.PrepareTauSEQ(data);
size_t size = sizeof(FFF_FILE_HEADER) + sizeof(BI_DATA_T) + sizeof(GEOMETRIC_INFO_T) + data->GetAcquiredSize();
If I write data1 with size to a .seq file using std::ofstream, I can then open this .seq file in any FLIR software with no problem.
But, I want to have a PNG instead of a SEQ. How should I do this?
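I can't test with this camera, but if the payload after those headers is plain 16-bit grayscale pixel data, one option is to wrap it in a cv::Mat and let OpenCV write the PNG (PNG supports 16-bit grayscale). This is only a sketch: the width, height and the rawPixels pointer are assumptions you would have to fill in from the SDK's image accessors:

#include <string>
#include <cstdint>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// Sketch: save one raw 16-bit thermal frame as a 16-bit grayscale PNG.
// rawPixels, width and height are assumed to come from the eBUS buffer;
// adjust to however you actually access the frame data.
bool saveFrameAsPng(const uint16_t* rawPixels, int width, int height,
                    const std::string& outPath)
{
    // Wrap the buffer without copying; clone() it if the buffer is reused.
    cv::Mat frame(height, width, CV_16UC1,
                  const_cast<uint16_t*>(rawPixels));

    // imwrite keeps the 16-bit depth for PNG, so no precision is lost.
    return cv::imwrite(outPath, frame);
}

If you want something viewable in a normal image viewer, you may also want to scale the raw counts down to 8 bits first (e.g. cv::normalize followed by convertTo), since thermal data usually occupies only part of the 16-bit range.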

Having problems loading a jpg file using libjpeg

I need to load jpg files in my application. I used libjpeg to save JPGs (from processed raw files) and it works nicely.
Reading them, though, is a different issue. I am getting very weird results: the image is very distorted, split into 12 columns, and mostly grayscale.
I followed the example, and the only modification I made is how I put the data into my buffer (the put_scanline_someplace() function is missing from the example).
Here is my relevant code (I need the data in BGR format):
dest=0;
while(cinfo.output_scanline < cinfo.output_height)
{
    jpeg_read_scanlines(&cinfo, buffer, 1);
    src=0;
    for(i=0;i<cinfo.output_width;i++)
    {
        image_buffer[dest*3+2]=buffer[src*3+0];
        image_buffer[dest*3+1]=buffer[src*3+1];
        image_buffer[dest*3+0]=buffer[src*3+2];
        src++;
        dest++;
    }
}
Is there something wrong with this code?
I found the solution: buffer is a JSAMPARRAY, i.e. an array of row pointers rather than a flat pixel buffer, so the code that works looks like this:
image_buffer[dest*3+2]=buffer[0][src*3+0];
image_buffer[dest*3+1]=buffer[0][src*3+1];
image_buffer[dest*3+0]=buffer[0][src*3+2];
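For reference, here is a more complete, self-contained sketch of the read path, closely following the libjpeg example code; it assumes a 3-component (RGB) JPEG, and loadJpegAsBgr is just a made-up name:

#include <cstdio>
#include <cstdlib>
#include <jpeglib.h>

// Sketch: decode a JPEG file into a tightly packed BGR buffer.
// Error handling is left to the default libjpeg error manager.
unsigned char* loadJpegAsBgr(const char* path, int* width, int* height)
{
    FILE* infile = fopen(path, "rb");
    if (!infile) return NULL;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *width  = cinfo.output_width;
    *height = cinfo.output_height;
    int row_stride = cinfo.output_width * cinfo.output_components;

    // One-row buffer managed by libjpeg; note it is an array of row pointers.
    JSAMPARRAY buffer = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr)&cinfo, JPOOL_IMAGE, row_stride, 1);

    unsigned char* image_buffer =
        (unsigned char*)malloc((size_t)row_stride * cinfo.output_height);

    size_t dest = 0;
    while (cinfo.output_scanline < cinfo.output_height) {
        jpeg_read_scanlines(&cinfo, buffer, 1);
        for (unsigned int i = 0; i < cinfo.output_width; i++) {
            // libjpeg delivers RGB; swap to BGR while copying.
            image_buffer[dest*3 + 2] = buffer[0][i*3 + 0];
            image_buffer[dest*3 + 1] = buffer[0][i*3 + 1];
            image_buffer[dest*3 + 0] = buffer[0][i*3 + 2];
            dest++;
        }
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);
    return image_buffer;   // caller frees with free()
}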

Why is there always a 4.5× difference between the RAM and hard-disk size of a cv::Mat?

First of all, I use C++. I have a CV_32F cv::Mat, and when I write it to disk using FileStorage, its size becomes around 4.5 times larger than it was in RAM during program execution. I have run several experiments and it is like that every time. So when I tried to read it back in, my RAM (6 GB) became insufficient, even though it was sufficient during the original run.
Here is how I write it down to the disk:
FileStorage fs( PATH, FileStorage::WRITE);
fs << "concatMat" << concatMat;
fs.release();
And this is how I calculate the occupied RAM size during the program execution:
size_t sz= sizeof( concatMat) + concatMat.total()*sizeof( CV_32F);
I wonder about the reason behind this, and especially why the difference is always about 4.5 times.
EDIT: I save them with a .bin extension, not YAML or XML. I need to save them efficiently and am open to recommendations.
Take a look at the contents of your XML, YML, or .bin file with Notepad++. (By the way, if you specify a path ending in .bin, OpenCV will still write it in YAML format...)
You will see that each float from your CV_32F Mat has been written in a format like 6.49999976e-001. That is 15 characters instead of the 4 bytes a binary float takes, a ratio of 15 / 4 = 3.75. If you add all the formatting characters such as ',', '\n' and ' ', you can easily reach a size more than 4 times bigger than what you had in RAM.
If you try to save a Mat containing only zeros, you will see that the file size is quite similar to what you had in RAM, because each zero is written simply as 0.. It is actually smaller if you save it in XML format.
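If you just need a compact dump rather than a human-readable file, a common alternative is to skip FileStorage and write the header plus raw bytes yourself. A minimal sketch, assuming the Mat is continuous (the helper names writeMatBinary/readMatBinary are made up):

#include <string>
#include <fstream>
#include <opencv2/core/core.hpp>

// Sketch: dump a continuous cv::Mat as rows, cols, type + raw data.
// For CV_32F this gives roughly rows*cols*4 bytes on disk, i.e. about
// the same size as the Mat occupies in RAM.
void writeMatBinary(const std::string& path, const cv::Mat& m)
{
    std::ofstream out(path.c_str(), std::ios::binary);
    int rows = m.rows, cols = m.cols, type = m.type();
    out.write(reinterpret_cast<const char*>(&rows), sizeof(rows));
    out.write(reinterpret_cast<const char*>(&cols), sizeof(cols));
    out.write(reinterpret_cast<const char*>(&type), sizeof(type));
    out.write(reinterpret_cast<const char*>(m.data), m.total() * m.elemSize());
}

cv::Mat readMatBinary(const std::string& path)
{
    std::ifstream in(path.c_str(), std::ios::binary);
    int rows = 0, cols = 0, type = 0;
    in.read(reinterpret_cast<char*>(&rows), sizeof(rows));
    in.read(reinterpret_cast<char*>(&cols), sizeof(cols));
    in.read(reinterpret_cast<char*>(&type), sizeof(type));
    cv::Mat m(rows, cols, type);
    in.read(reinterpret_cast<char*>(m.data), m.total() * m.elemSize());
    return m;
}

Usage would be something like writeMatBinary("concatMat.bin", concatMat); and later cv::Mat m = readMatBinary("concatMat.bin");.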

Stitch many images in one using Libtiff

I want to copy many images into one big image at different positions and then save it as .tif, but I don't want to keep them all in memory at once, so I need to create the TIFF file on disk and then write my images into it one by one.
I'm using libtiff. The problem is that I can't see a method that writes pixels at some address with a displacement, like
RGB* p= init_pointer+dy*width+dx
Is there any solution? (Maybe I should just create a binary file and write the header myself, but that's the hard way; maybe libtiff can make it easier?)
To rephrase my question: how do I obtain a raw pointer to the TIFF data stored on disk so that I can write pixel data into it?
For example, I could create the file on disk, write the header by hand, and then write my images at an offset using a raw pointer, something like:
FILE* f = _tfopen(fileName, _T("wb"));
uint64 fileSize = headerSize + (uint64)im.Width() * im.Height() * (grayscale ? 1 : 3);
// Allocate file space
if( _fseeki64(f, fileSize - 1, SEEK_SET) == 0 &&
    fputc(0, f) != EOF )
{
    _fseeki64(f, 0, SEEK_SET);
    // Write header
    // Write img using pointer with offset
    int64 posPixels = headerSize + (rc.top - rcClip.top) * width + (rc.left - rcClip.left);
    // ...
}
Once more: I need to write many images into one TIFF image like this http://upload.wikimedia.org/wikipedia/commons/a/a0/Rochester_NY.jpg, and I must avoid building the large image in RAM, so I need to write the images to the file one by one (only one image in RAM at a time), and I am trying to do this using libtiff.
Another simple example: I have a main image M(10000,10000) and a small image m1(200,200), and I need to write m1 into M at location (200,300).
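One way to get close to this with libtiff itself, rather than a hand-written header, is to create a tiled TIFF and write each small image as a tile at its target position, so only one sub-image has to be in RAM at a time. This is only a sketch with made-up helper names (openBigTiff, writeSubImageAsTile); it assumes the sub-images are aligned to the tile grid and that the tile size (which libtiff requires to be a multiple of 16) matches the sub-image size:

#include <tiffio.h>
#include <cstdint>

// Sketch: create a large tiled RGB TIFF on disk.
// tileW/tileH must be multiples of 16.
TIFF* openBigTiff(const char* path, uint32_t width, uint32_t height,
                  uint32_t tileW, uint32_t tileH)
{
    TIFF* tif = TIFFOpen(path, "w");
    if (!tif) return NULL;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_TILEWIDTH, tileW);
    TIFFSetField(tif, TIFFTAG_TILELENGTH, tileH);
    return tif;
}

// 'pixels' holds tileW*tileH RGB pixels of the sub-image to place at (x, y),
// where (x, y) is assumed to be tile-aligned.
void writeSubImageAsTile(TIFF* tif, const uint8_t* pixels,
                         uint32_t x, uint32_t y)
{
    // TIFFWriteTile picks the tile that contains (x, y) and writes it whole.
    TIFFWriteTile(tif, const_cast<uint8_t*>(pixels), x, y, 0, 0);
}

If the sub-images don't line up with a tile grid, you would have to assemble each tile (or strip) from the pieces that overlap it before writing it, because libtiff always writes whole tiles/strips and does not expose a random-access pixel pointer into the file.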

Looking at binary output from fortran on gnuplot

So, I created a binary file with fortran, using something similar to this:
open (3,file=filename,form="unformatted",access="sequential")
write(3) matrix(i,:)
The way I understand it, fortran pads the file with 4 bytes on either end of the file, and the rest is just the data that I want (in this case, a list of 1000 doubles).
I want to read this in with gnuplot, however, I don't know how to get gnuplot to skip the first and last 4 bytes, and read the rest in as doubles. The documentation isn't very helpful in this regard.
Thanks
Andrew: I see no reason to make gnuplot handle those extra bytes before/after your data. Either Fortran does not do this padding, or it does and gnuplot handles it without a hassle.
I've had a similar problem, and Google searches always brought me back here. I figured I'd better post my solution in case the same happens to other people.
I've been trying to make a 2D colormap plot using gnuplot's "plot 'file.dat' matrix with image" command. My ASCII output files were too big, so I wanted to use binary files instead. What I did was something like the following:
in fortran:
implicit none
real, dimension(128,128) :: array
integer :: irec
! ... initialize array ...
inquire( iolength=irec ) array
open( 36, file='out.dat', form='unformatted', access='direct', recl=irec )
write( 36, rec=1 ) array
close( 36, status='keep' )
in gnuplot:
plot 'out.dat' binary array=128x128 format="%float" with image
Notes:
By default, gnuplot assumes single precision in binary files. If your
fortran program outputs in double precision, simply change "%float"
to "%double".
My program used double precision data in the array, but output files
were too big. Since images based on double or single precision are
indistinguishable to the eye, and double-precision data files are
large, I converted my double-precision data to single-precision data
before writing it to a file.
You may have to adapt the gnuplot command depending on what
you want to do with the matrix, but this loads it in and plots it
well. This did what I needed it to do, and I hope it helps anyone
else who has a similar problem.
As you can see, if Fortran adds extra bytes before/after your data, gnuplot seems to read the data in without making you take those extra bytes into account. (In fact, with access='direct' Fortran writes no record markers at all, which is why it just works here.)
It might be easier to use direct I/O instead of sequential:
inquire (iolength = irec) matrix(1,:) !total record length for a row
open (3, file=filename, form="unformatted", access="direct", recl=irec)
write(3, rec=1) matrix(i,:)
The inquire statement gives you the length of the output list in 'recl' units. As such, the whole list fits in one record of length irec.
For writing a matrix to file column-wise you can then do:
inquire (iolength = irec) matrix(:,1)
open (3, file=filename, form="unformatted", access="direct", recl=irec)
do i=1,ncol
write(3, rec=i) matrix(:,i)
end do
or row-wise:
inquire (iolength = irec) matrix(1,:)
open (3, file=filename, form="unformatted", access="direct", recl=irec)
do i=1,nrow
write(3, rec=i) matrix(i,:)
end do
or element-wise:
inquire (iolength = irec) matrix(1,1)
open (3, file=filename, form="unformatted", access="direct", recl=irec)
do j=1,ncol
do i=1,nrow
write(3, rec=j+(i-1)*ncol) matrix(i,j)
end do
end do
or dump the entire matrix:
inquire (iolength = irec) matrix
open (3, file=filename, form="unformatted", access="direct", recl=irec)
write(3, rec=1) matrix
Testing with gnuplot 5.0: the following Fortran unformatted write of a double-precision array x of size N,
open(FID, file='binaryfile', form='unformatted')
do k = 1, N
write(FID) x(k)
end do
close(FID)
can be understood by gnuplot with the following:
plot 'binaryfile' binary format="%*1int%double%*1int"
The %*1int means: skip a four-byte integer once, which skips the record markers (header and footer) that Fortran wraps around each unformatted write.
For more information, and to adapt this to more complicated data, see the gnuplot 5.0 docs on binary, and check the sizes of the formats with show datafile binary datasizes. Note, however, that multi-column data (i.e. N doubles per write) can be accessed with the same format as above but with %Ndouble, where N is an integer. Then, using 1:3 for example, one would plot the first column against the third.