How to exchange data between C++ and MATLAB?

For the time being I am developing a C++ program based on some existing MATLAB code. During development I need to output intermediate results to MATLAB so I can compare the C++ implementation against the MATLAB result. What I do now is write a binary file from C++ and then load it in MATLAB. The following code shows an example:
#include <fstream>
using namespace std;

int main()
{
    ofstream abcdef;
    abcdef.open("C:/test.bin", ios::out | ios::trunc | ios::binary);
    for (int i = 0; i < 10; i++)
    {
        float x_cord = i * 1.38f;
        float y_cord = i * 10.0f;
        // Note: despite ios::binary, operator<< writes the values as text
        abcdef << x_cord << " " << y_cord << endl;
    }
    abcdef.close();
    return 0;
}
Once I have the file test.bin, I can load it automatically with the MATLAB command:
data = load('test.bin');
This method works well when the output is numerical data; however, it could fail if the output is a class with many member variables. I was wondering whether there are better ways to do this, not only for simple numerical data but also for more complicated data structures. Thanks!

I would suggest using the MATLAB Engine, through which you can pass data to MATLAB in real time and even visualize it using the various graph-plotting facilities available in MATLAB.
All you have to do is invoke the MATLAB Engine from the C/C++ program; then you can execute MATLAB commands directly from C/C++ and/or exchange data between MATLAB and C/C++. It works in both directions, i.e. from C++ to MATLAB and vice versa.
You can have a look at a working example here.
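For reference, a minimal sketch of what an engine session looks like from C++ (assuming the MATLAB Engine headers and libraries, engine.h / libeng / libmx, are on your include and link paths; the variable names are only illustrative):
#include "engine.h"   // MATLAB Engine C API (link against libeng and libmx)
#include <cstring>

int main()
{
    // Start (or connect to) a MATLAB session
    Engine *ep = engOpen(nullptr);
    if (!ep)
        return 1;

    // Copy a C++ array into a MATLAB variable named "x"
    double data[3] = {1.0, 2.0, 3.0};
    mxArray *x = mxCreateDoubleMatrix(1, 3, mxREAL);
    std::memcpy(mxGetPr(x), data, sizeof(data));
    engPutVariable(ep, "x", x);

    // Run MATLAB commands on the data, including plotting
    engEvalString(ep, "y = x.^2; plot(x, y);");

    // Pull a result back into C++ for comparison
    mxArray *y = engGetVariable(ep, "y");
    if (y)
    {
        double *py = mxGetPr(y);
        // ... compare py[0..2] against the C++ results ...
        mxDestroyArray(y);
    }

    mxDestroyArray(x);
    engClose(ep);
    return 0;
}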

I would suggest using the fread command in MATLAB. I do this all the time for exchanging data between MATLAB and other programs, for instance:
fd = fopen('datafile.bin','r');
a = fread(fd,3,'*uint32');
b = fread(fd,1,'float32');
With fread you have all the flexibility to read any type of data. By placing a * in the type name, as above, you also say that you want to store the result in that data type instead of the default MATLAB data type. So the first call reads three 32-bit unsigned integers and stores them as integers. The second reads a single-precision floating-point number but stores it as the default double precision.
You need to control the way the data is written in your C++ code, but that is inevitable. You can write a class method in C++ that packs the data in a deterministic way.
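A minimal sketch of such a method (the layout here is invented to match the fread calls above: three uint32 values followed by one float):
#include <cstdint>
#include <fstream>

struct Record
{
    std::uint32_t id[3];
    float value;

    // Write the members in a fixed, documented order so the MATLAB side
    // (fread: 3 x uint32, then 1 x float32) matches byte for byte.
    void write(std::ofstream &out) const
    {
        out.write(reinterpret_cast<const char*>(id), sizeof(id));
        out.write(reinterpret_cast<const char*>(&value), sizeof(value));
    }
};

int main()
{
    std::ofstream out("datafile.bin", std::ios::binary);
    Record r = {{1, 2, 3}, 4.5f};
    r.write(out);
    return 0;
}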
Dustin

Related

Linux command line tool to read binary C++ objects

I am writing many instances of a C++ object (below) to file in binary.
struct Example
{
    double a, b;
    char c;
    Object d;
};
If I want to read this data back in C++ I just reinterpret_cast the raw bytes to an Example* and extract the members.
However, I'd like a non-programmer to be able to scroll and read the file, like the Linux program less. Unfortunately less doesn't understand the binary data of Example.
I know I could just have a C++ program that converts the binary file to an ASCII file, but the data is large and cannot be rewritten to disk.
What is the best way of achieving this?
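One possible approach (a sketch only, assuming Example is trivially copyable and was written by the same compiler/ABI; Object is left as a placeholder) is a tiny dump program whose text output can be piped into less:
#include <fstream>
#include <iostream>

struct Object { /* placeholder for the real member type */ };

struct Example
{
    double a, b;
    char c;
    Object d;
};

int main(int argc, char *argv[])
{
    if (argc < 2)
    {
        std::cerr << "usage: dump <file>\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary);
    Example e;
    // Read whole records and print the readable members as text
    while (in.read(reinterpret_cast<char*>(&e), sizeof(e)))
        std::cout << e.a << ' ' << e.b << ' ' << e.c << '\n';
    return 0;
}
It could then be run as ./dump records.bin | less (the file name is illustrative).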

OpenCL without an external kernel file

I would like to create an OpenCL kernel without giving the end user access to its source.
Therefore, I can't use a regular external .cl text file. What are the alternatives, given that I would like to avoid building a huge text string containing the kernel?
And another question: if I put this code in a hardcoded string, won't it be possible to recover that code with a disassembler?
Here you have 2 scenarios:
If you are targeting one single device
If you are targeting any OpenCL device
In the first scenario, there is a possibility to embed the compiled binary into your executable (e.g. as a byte array or string literal) and load it when you run the program.
There would be no reverse engineering possible (beyond the usual techniques, such as disassembling the device binary), since the program will contain compiled code and not the original source you wrote.
The way of doing that would be:
// context and device obtained earlier; binary_dev1 holds the embedded device binary
unsigned char binary_dev1[] = { /* embedded device binary bytes */ };
size_t binarySize = sizeof(binary_dev1);
const unsigned char *binary = binary_dev1;

cl_int binaryStatus, errNum;
cl_program program = clCreateProgramWithBinary(context, 1, &device,
                                               &binarySize,
                                               &binary,
                                               &binaryStatus,
                                               &errNum);
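To obtain a binary to embed in the first place, one option (a sketch for the single-device case, with error handling omitted) is to build the .cl source once offline and then query the compiled binary back with clGetProgramInfo:
#include <CL/cl.h>
#include <vector>

// Offline helper: after clBuildProgram() succeeds for a single device,
// extract the compiled binary so it can be embedded in the executable.
std::vector<unsigned char> getProgramBinary(cl_program program)
{
    size_t binSize = 0;
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     sizeof(binSize), &binSize, nullptr);

    std::vector<unsigned char> bin(binSize);
    unsigned char *binPtr = bin.data();
    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     sizeof(binPtr), &binPtr, nullptr);
    return bin;
}
The resulting bytes can then be written out, for example as a generated C array, and used with clCreateProgramWithBinary as shown above.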
The second alternative involves protecting the kernel source with some sort of "mangling".
Since the mangler code is itself compiled, reverse engineering it would take some effort.
You can apply any reversible mangling you can think of, and even combine several. Some ideas:
Compress the code using a compression format, but hardcode some parameters of the decompression to make it less straightforward.
LZ4, ZLIB, etc...
Use an XOR operator on the code. Better if it varies over time, and better if it varies using a non-obvious rule.
For example:
unsigned char seq = 0x1A;
for (int i = 0; i < len; i++)
{
    out[i] = in[i] ^ seq;
    // evolve the key in a non-obvious (but reproducible) way
    seq = (((seq ^ i) * 78965213) >> 4) + (((seq * i) * 56987) << 4);
}
Encode it using encoding methods that require a key, and are reversible
Use a program that protects your program binary towards reverse engineering, like Themida.
Use SPIR 1.2 for OpenCL 1.2 or SPIR 2.0 for OpenCL 2.0 until SPIR-V for OpenCL 2.1 is available.

How can I read numbers from a file in C++?

My main question is about how you read data from a file that is not of the char data type.
I am writing a file of data from MATLAB as follows:
x=rand(1,60000);
fID=fopen('Data.txt','w');
fwrite(fID,x,'float');
fclose(fID);
Then when I try to read it in C++ using the following code, "num" doesn't change.
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    fstream fin("Data.txt", ios::in | ios::binary);
    if (!fin)
    {
        cout << "\n Couldn't find file \n";
        return 0;
    }
    float num = 123;
    float loopSize = 100e3;
    for (int i = 0; i < loopSize; i++)
    {
        if (fin.eof())
            break;
        fin >> num;
        cout << num;
    }
    fin.close();
    return 0;
}
I can read and write files in MATLAB fine, and I can read and write in C++, but I can't write in MATLAB and read in C++. The files I write in MATLAB are in the format I want, but the C++ code seems to be writing/reading the numbers as text. How do you read a series of floats from a file in C++, or what am I doing wrong?
edit: The loop code is messy because I didn't want an infinite loop and the eof flag was never being set.
Formatted I/O using << and >> does indeed read and write numeric values as text.
Presumably, Matlab is writing the floating-point values in a binary format. If it uses the same format as C++ (most implementations of which use the standard IEEE binary format), then you could read the bytes using unformatted input, and reinterpret them as a floating-point value, along the lines of:
float f; // Might need to be "double", depending on format
fin.read(reinterpret_cast<char*>(&f), sizeof f);
If Matlab does not use a compatible format, then you'll need to find out what format it does use and write some code to convert it.
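Putting that together, a rough sketch of a reader for the file produced by the MATLAB code above (assuming 'float' in fwrite means 4-byte IEEE single precision, which matches C++ float on common platforms):
#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    std::ifstream fin("Data.txt", std::ios::binary);
    if (!fin)
    {
        std::cout << "Couldn't open file\n";
        return 1;
    }

    std::vector<float> values;
    float f;
    // Keep reading raw 4-byte floats until the read fails (end of file)
    while (fin.read(reinterpret_cast<char*>(&f), sizeof f))
        values.push_back(f);

    std::cout << "Read " << values.size() << " floats\n";
    return 0;
}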
You need to read and write the same format. For that matter, what you have written from MATLAB is an unformatted sequence of bytes which may or may not be readable depending on whether you use the same system. You can probably read this unformatted sequence of bytes into a C++ program (e.g. using std::istream::read()), but you shouldn't consider the data to be stored in any well-defined format.
To actually store data, you need to be aware of the format the data has. The format can be binary or text, but you should be clear about what the bytes mean, in which order they appear, how many there are or how to detect the end of a value, etc.
Using fwrite is not the best idea, because this will write out the data in an internal format, which might or might not be easy to read back in your program.
Matlab has other ways of writing output, e.g. functions like fprintf. Better write out your data this way, then it should be obvious how to read it back into another application.
Just use fprintf(fID, "%f\n", x), and then you should be able to use scanf to read this back in C/C++.
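For instance, once the values are written one per line as text, the reading side in C++ is just formatted input (a sketch; the file name matches the MATLAB snippet above):
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream fin("Data.txt");
    double num;
    // operator>> parses the textual numbers written by fprintf("%f\n", ...)
    while (fin >> num)
        std::cout << num << '\n';
    return 0;
}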

A good way to output array values from Python and then take them in through C++?

Due to annoying overflow problems with C++, I want to instead use Python to precompute some values. I have a function f(a,b) that spits out a value. I want to output all the values I need, based on ranges of a and b, into a file, and then read that file in C++ and populate a vector or array or whatever's better.
What is a good format to output f(a,b) in?
What's the best way to read this back into C++?
Vector or multidim array?
You can use Python to write out a .h file that is compatible with C++ source syntax.
h_file.write('{')
for a in range(a_size):
    h_file.write('{' + ','.join(str(f(a, b)) for b in range(b_size)) + '},\n')
h_file.write('}')
You will probably want to modify that code to throw some extra newlines in, and in fact I have such code that I can show later (don't have access to it now).
You can use Python to write out C++ source code that contains your data. E.g:
def f(a, b):
    # Your function here, e.g:
    return pow(a, b, 65537)

num_a_values = 50
num_b_values = 50

# Write source file
with open('data.cpp', 'wt') as cpp_file:
    cpp_file.write('/* Automatically generated file, do not hand edit */\n\n')
    cpp_file.write('#include "data.hpp"\n')
    cpp_file.write('const int f_data[%d][%d] =\n'
                   % (num_a_values, num_b_values))
    cpp_file.write('{\n')
    for a in range(num_a_values):
        values = [f(a, b) for b in range(num_b_values)]
        cpp_file.write('  {' + ','.join(map(str, values)) + '},\n')
    cpp_file.write('};\n')

# Write corresponding header file
with open('data.hpp', 'wt') as hpp_file:
    hpp_file.write('/* Automatically generated file, do not hand edit */\n\n')
    hpp_file.write('#ifndef DATA_HPP_INCLUDED\n')
    hpp_file.write('#define DATA_HPP_INCLUDED\n')
    hpp_file.write('#define NUM_A_VALUES %d\n' % num_a_values)
    hpp_file.write('#define NUM_B_VALUES %d\n' % num_b_values)
    hpp_file.write('extern const int f_data[%d][%d];\n'
                   % (num_a_values, num_b_values))
    hpp_file.write('#endif\n')
You then compile the generated source code as part of your project. You can then use it by #including the header and accessing the f_data[] array directly.
This works really well for small to medium size data tables, e.g. icons. For larger data tables (millions of entries) some C compilers will fail, and you may find that the compile/link is unacceptably slow.
If your data is more complicated, you can use this same method to define structures.
[Based on Mark Ransom's answer, but with some style differences and more explanation].
If there are megabytes of data, then I would read them by memory-mapping the data file, read-only. I would arrange things so I can use the data file directly, without having to read it all in at startup.
The reason for doing it this way is that you don't want to read megabytes of data at startup if you're only going to use some of the values. By using memory mapping, your OS will automatically read just the parts of the file that you need. And if you run low on RAM, your OS can reuse the memory allocated for that file without having to waste time writing it to the swap file.
If the output of your function is a single number, you probably just want an array of ints. You'll probably want a 2D array, e.g.:
#include <fcntl.h>     // open
#include <stdio.h>
#include <sys/mman.h>  // mmap

#define DATA_SIZE (25 * 50 * sizeof(int))   // file size in bytes: 25 rows x 50 columns
typedef const int (*data_table_type)[50];   // pointer to rows of 50 ints

int fd = open("my_data_file.dat", O_RDONLY);
data_table_type data_table = (data_table_type)mmap(0, DATA_SIZE,
                                                   PROT_READ, MAP_SHARED, fd, 0);
printf("f(5, 11) = %d\n", data_table[5][11]);
For more info on memory mapped files, see Wikipedia, or the UNIX mmap() function, or the Windows CreateFileMapping() function.
If you need more complicated data structures, you can put C/C++ structures and arrays into the file. But you can't embed pointers or any C++ class that has a virtual anything.
Once you've decided on how you want to read the data, the next question is how to generate it. struct.pack() is very useful for this - it allows you to convert Python values into a properly formatted byte string, which you can then write to a file.

Transferring Matlab variables to C

I have a very large data structure in some Matlab code that is in the form of cells of arrays. We want to develop C code to work on this data, but I need some way to store the Matlab variable (which we generate in Matlab) and open it in a C/C++ program. What is the easiest way to bridge the two programs so I can transfer the data?
If you are only moving the data from MATLAB to C occasionally, the easiest thing would be to write it to a binary file, then read from the file in C. This of course leaves the C code completely independent of MATLAB.
This does not have to be that messy if your data structure is just a cell array of regular arrays, e.g.
a{1} = zeros(1,5);
a{2} = zeros(1,4);
You could just write a header for each cell, followed by the data to the file. In the above case, that would be:
[length{1} data{1} length{2} data{2}]
In the above case:
5 0 0 0 0 0 4 0 0 0 0
If the arrays are 2D, you can extend this by writing: row, column, then the data in row-major order for each cell.
This might not be entirely convenient, but it should be simple enough; a sketch of the reading side is shown below. You could also save it as a .mat file and read that, but I would not recommend it. It is much easier to put the data in a simple binary format from MATLAB.
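To make that concrete, a minimal sketch of the C++ reading side for the layout above (assuming everything, including each length, was written as doubles with MATLAB's fwrite; the file name is illustrative):
#include <cstddef>
#include <fstream>
#include <vector>

int main()
{
    std::ifstream in("cells.bin", std::ios::binary);
    std::vector<std::vector<double>> cells;

    // Each cell: its length, followed by that many values,
    // everything stored as doubles
    double len;
    while (in.read(reinterpret_cast<char*>(&len), sizeof(len)))
    {
        std::vector<double> cell(static_cast<std::size_t>(len));
        in.read(reinterpret_cast<char*>(cell.data()),
                static_cast<std::streamsize>(cell.size() * sizeof(double)));
        cells.push_back(std::move(cell));
    }
    return 0;
}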
If you need to move the data more frequently than is convenient for a file, there are other options, but all I can think of are tied to MATLAB in some way.
You should use mex files:
http://www.mathworks.fr/support/tech-notes/1600/1605.html
If the two processes need to connect during their lifecycle, you have plenty of options:
Compile Matlab DLL.
Use Matlab Engine.
Compile MEX file (as #Oli mentioned earlier)
If the communication is offline (after MATLAB closes, C++ starts to read), then you should use the filesystem. Try to format the data as XML; it is a well-recognized standard.