How to save a C++-readable .mat file - c++

I am running DCT code in Matlab and I would like to read the compressed file (.mat) into a C program. However, I am not sure this is right. I have not yet finished my code, but I would like to ask for an explanation of how to create a C++-readable file from my .mat file.
I am somewhat confused when it comes to .mat, .txt, and the binary/float details of files. Could someone please explain this to me?

It seems that you have a lot of options here, depending on your exact needs, time, and skill level (in both Matlab and C++). The obvious ones are:
ASCII files
You can generate ASCII files in Matlab either using the save(filename, variablename, '-ascii') syntax, or you can create a more custom format using C-style fprintf commands. Then, within a C or C++ program, the files are read using fscanf.
This is often easiest, and good enough in many cases. The fact that a human can read the files using notepad++, emacs, etc. is a nice sanity check (although this is often overrated).
There are two big downsides. First, the files are very large (an 8-byte double requires about 19 bytes to store in ASCII). Second, you have to be very careful to minimize the inevitable loss of precision.
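For illustration, here is a minimal C++ reader for a matrix saved this way. The file name and dimensions are assumptions and must match what was saved from Matlab, e.g. with save('data.txt', 'A', '-ascii'):
#include <cstdio>

int main() {
    const int rows = 32, cols = 32;   // must match the Matlab array
    double a[32][32];

    FILE* fp = std::fopen("data.txt", "r");
    if (!fp) return 1;

    // -ascii output is whitespace-separated numbers, one matrix row per
    // line, so a plain fscanf loop in row order is enough.
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            if (std::fscanf(fp, "%lf", &a[i][j]) != 1) {
                std::fclose(fp);
                return 1;   // malformed or truncated file
            }

    std::fclose(fp);
    return 0;
}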
Bytes-on-a-disk
For a simple array of numbers (for example, a 32-by-32 array of doubles) you can simply use the fwrite Matlab function to write the array to disk. Then within C/C++ use the parallel fread function.
This has no loss of precision, is pretty fast, and takes up relatively little space on disk.
The downside with this approach is that complex Matlab structures cannot necessarily be saved.
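A minimal sketch of the C++ side, assuming a 32-by-32 double array and matching endianness on the two machines (note that Matlab writes arrays in column-major order):
#include <cstdio>
#include <vector>

int main() {
    // Matlab side (for reference):
    //   fid = fopen('data.bin', 'w');
    //   fwrite(fid, A, 'double');   % A is 32-by-32, written column-major
    //   fclose(fid);
    const size_t n = 32 * 32;
    std::vector<double> a(n);

    FILE* fp = std::fopen("data.bin", "rb");
    if (!fp) return 1;

    // Read the raw bytes straight back; a[col*32 + row] is A(row+1, col+1).
    size_t got = std::fread(a.data(), sizeof(double), n, fp);
    std::fclose(fp);
    return got == n ? 0 : 1;
}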
Mathworks provided C library
Since this is a pretty common problem, the Mathworks has actually solved it by providing a direct C implementation of the functions needed to read and write *.mat files. I have not used this particular library, but generally the libraries they provide are pretty easy to integrate. Some documentation can be found starting here: http://www.mathworks.com/help/matlab/read-and-write-matlab-mat-files-in-c-c-and-fortran.html
This should be a pretty robust solution, and relatively insensitive to changes, since it is part of the mainstream, supported Matlab toolset.
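Usage looks roughly like the sketch below; the file and variable names are hypothetical, and you need to link against the libmat/libmx libraries shipped with Matlab:
#include "mat.h"

int main() {
    MATFile* pmat = matOpen("data.mat", "r");
    if (!pmat) return 1;

    // Fetch a variable by the name it had in the Matlab workspace.
    mxArray* arr = matGetVariable(pmat, "x");
    if (arr) {
        size_t rows = mxGetM(arr);
        size_t cols = mxGetN(arr);
        double* p = mxGetPr(arr);   // column-major, as in Matlab itself
        // ... use p[0 .. rows*cols-1] here ...
        mxDestroyArray(arr);
    }

    matClose(pmat);
    return 0;
}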
HDF5 based *.mat file
With recent versions of Matlab, you can use the notation save(filename, variablename, '-v7.3') to force Matlab to save the file in an HDF5-based format. Then you can use tools from the HDF5 Group to handle the file. Note that there is a decent, Java-based GUI viewer (http://www.hdfgroup.org/hdf-java-html/hdfview/index.html#download_hdfview) and libraries for C, C++, and Fortran.
This is a non-fragile way to store binary data, although it is a bit of work to get the libraries working in your code.
One downside is that the Mathworks may change the details of how they map Matlab data types into the HDF5 file. If you really want to be robust, you may want to try ...
Custom HDF5 file
Instead of just taking whatever format the Mathworks decides to use, it's not that hard to create an HDF5 file directly and push data into it from Matlab. This lets you control things like compression, chunk sizing, dataset hierarchy, and names. It also insulates you from any future changes in the default *.mat file format. See the h5write command in Matlab.
It is still a bit of effort to get running from the C/C++ end, so I would only go down this path if your project warranted it.
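For what it's worth, the C/C++ side of reading a Matlab-written HDF5 dataset is only a few calls. A minimal sketch, assuming a 32-by-32 double dataset created in Matlab with h5create('data.h5', '/A', [32 32]) and h5write('data.h5', '/A', A):
#include "hdf5.h"

int main() {
    double a[32][32];

    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    if (file < 0) return 1;

    hid_t dset = H5Dopen2(file, "/A", H5P_DEFAULT);
    if (dset < 0) { H5Fclose(file); return 1; }

    // HDF5 hands the data back in C's row-major order, so the array
    // arrives transposed relative to Matlab's column-major view.
    herr_t status = H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
                            H5P_DEFAULT, a);

    H5Dclose(dset);
    H5Fclose(file);
    return status < 0 ? 1 : 0;
}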

.mat is a special format for MATLAB itself.
What you can do is to load your .mat file in the MATLAB workspace:
load file.mat
Then use fopen and fprintf to write the data to file.txt, and you can then read the contents of that file in C.

You can also use Matlab's dlmwrite to write a delimited ASCII file, which will be easy to read in C (and human-readable too), although it will not be as compact, if that is core to the issue.

Adding to what has already been mentioned, you can save your data from MATLAB using -ascii.
save x.mat x
Becomes:
save x.txt x -ascii

Related

Writing an output file in CUDA C++

I need to write simulation data computed on GPU into an output .csv file. Normally I would just use the fstream library but that's not possible on GPU.
Are there any built-in functions or other libraries that I could use to write data to .csv or .txt files directly from device code? Right now performance is not that important; I'm after an easy interim solution.
No, it's not possible to do direct file I/O in CUDA from device code, unless you are using something like GPU Direct Storage (GDS) (which most likely you are not, at the current time, and based on your question). If you don't already have it set up, GDS might not be an "easy interim solution".
Copy the data to the host, then use whatever file I/O routines you are comfortable with.
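A minimal sketch of that pattern, with hypothetical names and a made-up buffer size; the kernel launch itself is elided:
#include <cuda_runtime.h>
#include <fstream>
#include <vector>

int main() {
    const int n = 1024;                       // assumed result count

    float* d_data = nullptr;                  // filled by your kernels
    cudaMalloc(&d_data, n * sizeof(float));
    // ... launch simulation kernels that write into d_data ...

    // Copy the results back to the host...
    std::vector<float> h_data(n);
    cudaMemcpy(h_data.data(), d_data, n * sizeof(float),
               cudaMemcpyDeviceToHost);

    // ...then ordinary fstream I/O works as usual on the host side.
    std::ofstream out("results.csv");
    for (int i = 0; i < n; ++i)
        out << i << ',' << h_data[i] << '\n';

    cudaFree(d_data);
    return 0;
}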
Note that requests for library recommendations are specifically off-topic for SO.
Use printf from within the CUDA kernel to emit the values, redirect stdout to a text file, and then parse that file to convert it to CSV.

How to create and append to a gz file without decompressing?

I have an enormous input file, terabytes in size; it is gzipped (.gz).
I need to read each line individually, and decide whether to add it to a new file.
The output file is also expected to be terabytes in size, but smaller, since I won't keep all the lines.
Is there a way to do this in C++ using only the standard libraries? I don't want to use Boost. Is that possible?
The standard C++ libraries do not deal with gzip format. Neither do the standard C libraries. I don't know about boost.
But you can certainly use zlib, which I believe comes with a C++ wrapper if the use of C is too daunting.
It's not generally a good idea to append to a gzipped file, by the way, although it is theoretically possible. But you lose a lot of compression because the algorithm needs to be reset and thereby loses context. However, you can open a compressed stream and write to it, so you don't need to write the uncompressed file to disk. I think that's all you need for this query.
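A minimal sketch of that line-filtering loop using zlib's gzFile interface; the file names and the keep_line predicate are placeholders, and lines are assumed to fit in the buffer:
#include <zlib.h>
#include <cstring>

// Placeholder predicate: decide whether a line goes to the output.
static bool keep_line(const char* line) {
    return std::strstr(line, "KEEP") != nullptr;
}

int main() {
    gzFile in  = gzopen("input.gz", "rb");
    gzFile out = gzopen("output.gz", "wb");   // written compressed directly
    if (!in || !out) return 1;

    char buf[1 << 16];
    while (gzgets(in, buf, sizeof(buf)) != Z_NULL)
        if (keep_line(buf))
            gzputs(out, buf);

    gzclose(in);
    gzclose(out);
    return 0;
}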

Embedding compressed files into a c++ program

I want to create a cross-platform installer in C++. It can use any compression type, e.g. zip or gzip, embedded inside the program itself, like a typical installer. I don't want to make many changes for different platforms (Linux and Windows). How do I embed files into a C++ program and extract them, cross-platform?
C++ is a poor choice for a cross-platform installer, because there's no such thing as cross-platform machine code.
C++ code can be extremely portable, but it needs to be compiled for each platform, and then you get a distinct output executable for each platform.
If you want to build installers for many platforms from a single source file, you can use C++. But if you want to build ONE installer that works on many platforms, you'll need to use an interpreted or JIT-compiled language with runtime support available on all your targets. Of those, the only one likely to already be installed on a majority of computers of each platform is Java.
OK, assuming that you're building many single-platform installers as native executables, this is what is needed:
You need to get the compressed code into the program. You want to do this in a way that doesn't affect the load time badly, nor cause compilation to take a few months. So using an initialized global array is a bad idea.
One way is to link your data in as an additional section. There are tools to help with that, e.g. Binary to COFF converter, and I've seen an ELF version as well, maybe this. But this might still cause the runtime library to try to load the entire file into memory before execution begins.
Another way is to use platform-specific resource APIs. This is efficient, but platform specific.
The most straightforward solution is to simply append the compressed archive to your executable, then append eight more bytes with the file offset where the compressed archive begins. Unpacking is then as simple as opening the executable in read-only mode, fseek(-8, SEEK_END), reading the correct offset, then seeking to the beginning of the compressed data and passing that stream to your decompressor.
Of course, now I find a website listing pretty much the same methods.
And here's a program which implements the last option, with additional ability to store multiple files. I wouldn't recommend doing that, let the compression library take care of storing the file metadata.
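A minimal sketch of the unpacking side of that last option, assuming the 8-byte trailer is a little-endian offset written by the packer (using argv[0] to find the executable is a simplification; some platforms need a more reliable way to locate the running binary):
#include <cstdio>
#include <cstdint>

int main(int argc, char** argv) {
    (void)argc;
    FILE* fp = std::fopen(argv[0], "rb");
    if (!fp) return 1;

    // Read the 8-byte trailer holding the archive's start offset.
    uint64_t offset = 0;
    if (std::fseek(fp, -8, SEEK_END) != 0 ||
        std::fread(&offset, sizeof offset, 1, fp) != 1) {
        std::fclose(fp);
        return 1;
    }

    // Seek to the start of the embedded archive and hand the stream
    // to whatever decompressor you chose (zlib, libzip, ...).
    std::fseek(fp, static_cast<long>(offset), SEEK_SET);
    // ... decompress from fp here ...

    std::fclose(fp);
    return 0;
}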
The only way I know of to portably embed data (strings or raw, binary data) in a C++ program is to convert it into a data table, then compile that. For raw data, this would look something like:
unsigned char data[] =
{
// raw data here.
};
It should be fairly trivial to write a small program which reads your binary data and writes it out as a C++ table, like the above. Compile it, link it into your program, and there you are.
Use zlib.
Have your packing program generate a list of executables in the program, i.e.:
unsigned char x86_windows_version[] = { 0xff,...,0xff};
unsigned char arm_linux_version[] = { 0xff,...,0xff};
unsigned char* binary_files[MAX_BINARIES] = {x86_windows_version,arm_linux_version};
somewhere in your executable:
inflate(x86_windows_version);
And that's about it; look at the zlib docs for the parameters of inflate() and deflate().
It's a pattern used a lot on embedded platforms (ones that are not Linux), mostly for string tables and other image binaries. It should work for your needs.
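For the one-shot decompression step, zlib's simple API is compress()/uncompress() (the streaming API is deflate()/inflate()). A sketch, assuming the packer recorded the uncompressed size alongside each blob:
#include <zlib.h>
#include <vector>

// Hypothetical embedded payload produced offline with compress().
extern const unsigned char payload[];
extern const unsigned long payload_size;       // compressed byte count
extern const unsigned long payload_raw_size;   // original byte count

std::vector<unsigned char> unpack() {
    std::vector<unsigned char> out(payload_raw_size);
    uLongf out_len = payload_raw_size;
    // One-shot inflate of the whole embedded blob.
    if (uncompress(out.data(), &out_len, payload, payload_size) != Z_OK)
        out.clear();   // corrupt or truncated payload
    else
        out.resize(out_len);
    return out;
}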

Simple API for random access into a compressed data file

Please recommend a technology suitable for the following task.
I have a rather big (500MB) data chunk, which is basically a matrix of numbers. The data entropy is low (it should be well-compressible) and the storage is expensive where it sits.
What I am looking for is to compress it with a good compression algorithm (like, say, gzip) with markers that would enable very occasional random access: random access as in "read a byte from location [64-bit address] in the original (uncompressed) stream". This is a little different from classic deflate libraries like zlib, which let you decompress the stream continuously. What I would like is random access at a latency of, say, as much as 1 MB of decompression work per byte read.
Of course, I hope to use existing library rather than reinvent the NIH wheel.
If you're working in Java, I just published a library for that: http://code.google.com/p/jzran.
Byte Pair Encoding allows random access to data.
You won't get as good compression with it, but you're sacrificing adaptive (variable) hash trees for a single tree, so you can access it.
However, you'll still need some kind of index in order to find a particular "byte". Since you're okay with 1 MB of latency, you'll be creating an index for every 1 MB. Hopefully you can figure out a way to make your index small enough to still benefit from the compression.
One of the benefits of this method is random access editing too. You can update, delete, and insert data in relatively small chunks.
If it's accessed rarely, you could compress the index with gzip and decode it when needed.
If you want to minimize the work involved, I'd just break the data into 1 MB (or whatever) chunks, then put the pieces into a PKZIP archive. You'd then need a tiny bit of front-end code to take a file offset, and divide by 1M to get the right file to decompress (and, obviously, use the remainder to get to the right offset in that file).
Edit: Yes, there is existing code to handle this. Recent versions of Info-ZIP's unzip (6.0 is current) include api.c. Among other things, that includes UzpUnzipToMemory -- you pass it the name of a ZIP file and the name of one of the files in that archive that you want to retrieve. You then get a buffer holding the contents of that file. For updating, you'll need the api.c from zip3.0, using ZpInit and ZpArchive (though these aren't quite as simple to use as the unzip side).
Alternatively, you can just run a copy of zip/unzip in the background to do the work. This isn't quite as neat, but undoubtedly a bit simpler to implement (as well as allowing you to switch formats pretty easily if you choose).
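The front-end arithmetic for this is tiny. A sketch, assuming 1 MB chunks named chunk_0000, chunk_0001, ... inside the archive (the naming scheme is an assumption):
#include <cstdint>
#include <cstdio>

const uint64_t kChunkSize = 1 << 20;   // 1 MB, matching the split size

// Map an offset in the original data to (archive member name, local offset).
void locate(uint64_t offset, char* name, size_t name_len, uint64_t* local) {
    uint64_t chunk = offset / kChunkSize;
    *local = offset % kChunkSize;
    std::snprintf(name, name_len, "chunk_%04llu",
                  static_cast<unsigned long long>(chunk));
}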
Take a look at my project - csio. I think it is exactly what you are looking for: stdio-like interface and multithreaded compressor included.
It is a library, written in C, which provides a CFILE structure and the functions cfopen, cfseek, cftello, and others. You can use it with regular (not compressed) files and with files compressed with the help of the dzip utility. This utility is included in the project and written in C++. It produces a valid gzip archive, which can be handled by standard utilities as well as by csio. dzip can compress in many threads (see the -j option), so it can compress very big files very quickly.
Typical usage:
dzip -j4 myfile
...
CFILE* file = cfopen("myfile.dz", "r");
off_t some_offset = 673820;
cfseek(file, some_offset);
char buf[100];
cfread(buf, 100, 1, file);
cfclose(file);
It is MIT-licensed, so you can use it in your projects without restrictions. For more information, visit the project page on GitHub: https://github.com/hoxnox/csio
Compression algorithms usually work in blocks, I think, so you might be able to come up with something based on the block size.
I would recommend using the Boost Iostreams Library. Boost.Iostreams can be used to create streams to access TCP connections or as a framework for cryptography and data compression. The library includes components for accessing memory-mapped files, for file access using operating system file descriptors, for code conversion, for text filtering with regular expressions, for line-ending conversion and for compression and decompression in the zlib, gzip and bzip2 formats.
The Boost library has been accepted by the C++ standards committee as part of TR2, so it may eventually be built into most compilers (under std::tr2::sys). It is also cross-platform compatible.
Boost Releases
Boost Getting Started Guide. NOTE: boost::iostreams is not a header-only library; it requires separately-compiled library binaries and special treatment when linking.
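A minimal sketch of reading a gzip file through Boost.Iostreams (requires linking against boost_iostreams and zlib; note this on its own gives sequential access, so for random access you would still combine it with a chunking scheme like the ones discussed here):
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream file("data.gz", std::ios_base::in | std::ios_base::binary);

    // Layer a gzip decompressor over the ordinary file stream.
    boost::iostreams::filtering_istream in;
    in.push(boost::iostreams::gzip_decompressor());
    in.push(file);

    std::string line;
    while (std::getline(in, line))
        std::cout << line << '\n';

    return 0;
}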
Sort the big file first,
then divide it into chunks of your desired size (1 MB), with some sequence in the names (File_01, File_02, ..., File_NN),
take the first ID from each chunk plus the filename and put both into another file,
then compress the chunks.
You will be able to search the ID file using whatever method you wish, maybe a binary search, and open each file only as you need it.
If you need deep indexing you could use a B-tree algorithm, with the "pages" being the files.
Several implementations of this exist on the web, because the code is a little tricky.
You could use bzip2 and make your own API pretty easily, based on James Taylor's seek-bzip2.

How to decompress a file in fortran77?

I have a compressed file.
Let's ignore the tar command because I'm not sure it is compressed with that.
All I know is that it is compressed in fortran77 and that is what I should use to decompress it.
How can I do it?
Is decompression a one-way road, or do I need a certain header file that will direct the decompression?
It's not a .Z file. It ends at something else.
What do I need to decompress it? I know the format of the final decompressed archive.
Is it possible that the file is compressed in a simple way but appears with a different extension?
First, let's get the "fortran" part out of the equation. There is no standard (and by that, I mean the Fortran standard) way to either compress or decompress files, since Fortran doesn't have a compression utility as part of the language. Maybe someone has written their own, but that's entirely up to them.
So, you're stuck with publicly available compression utilities and such. On systems which have those available, and on compilers which support it (it varies), you can use the SYSTEM function, which executes a system command by passing a command string to the operating system's command interpreter (I know it exists in CVF, probably IVF ... you should look it up in the help for your compiler).
Since you already asked a similar question, I assume you're still having problems with this. You mentioned that "it was compressed with fortran77". What do you mean by that? That someone built a compression utility in F77 and used it? That would make it a custom solution?
If it's some kind of custom solution, then it could practically be anything, since a lot of algorithms can serve as "compression algorithms" (writing a file as binary rather than plain text will save a few bytes; voilà, "compression").
Or have I misunderstood something? Please elaborate a little.
My guess is that you have a binary file, which is output by a Fortran program. These can look like compressed files because they are not readable in a text editor.
Fortran allows you to write the in-memory data out to a file without formatting it, so that you can reload it later without having to parse it. The problem, however, is that you need that original source code in order to see what types of variables are written in the file.
If you have no access to the Fortran source code, but a lot of time to spare, you could write a simple Fortran program and guess what types of variables are being used. I wouldn't advise it, though, as Fortran is not very forgiving.
If you want some simple source code to try, look at this page which details binary read and write in Fortran, and includes a code sample. Just start by replacing reclength=reclength*4 with reclength=reclength*2 for a double precision real.
There is no standard decompression method, there are tons. You will need to know the method used to compress it in order to decompress it.
You said that the file extension was not .Z, but something else. What was that something else?
If it's .gz (which is very common on Unix systems), "gunzip" is the proper command. If it's .tgz, you can gunzip and untar it. (Or you can read the man page for tar(1), since it probably has the ability to gunzip and extract together.)
If it's on Windows, see if Windows can read it directly, as the file system itself appears to support the ZIP format.
If something else, please just list the file name (or, if there are security implications, the file name beginning with the first period), and we might be able to figure it out.
You can check to see if it's a known compressed file type with the file command. Assuming file returns something like "binary file" then you're almost certainly looking at plain binary data.
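If you'd rather check from code than from the shell, sniffing the magic bytes yourself takes only a few lines of C++ (the signatures below are the standard ones for gzip, zip, bzip2, and compress):
#include <cstdio>

int main(int argc, char** argv) {
    if (argc != 2) return 1;

    FILE* fp = std::fopen(argv[1], "rb");
    if (!fp) return 1;

    unsigned char m[4] = {0};
    std::fread(m, 1, sizeof m, fp);
    std::fclose(fp);

    // Compare against well-known magic numbers, as file(1) would.
    if (m[0] == 0x1f && m[1] == 0x8b)
        std::puts("gzip");
    else if (m[0] == 'P' && m[1] == 'K' && m[2] == 0x03 && m[3] == 0x04)
        std::puts("zip");
    else if (m[0] == 'B' && m[1] == 'Z' && m[2] == 'h')
        std::puts("bzip2");
    else if (m[0] == 0x1f && m[1] == 0x9d)
        std::puts("compress (.Z)");
    else
        std::puts("unknown; possibly raw (e.g. Fortran unformatted) binary");
    return 0;
}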