I have some code that I need to write in C++ because it relies heavily on templates. I want to call this code from MATLAB: basically, I need to pass some parameters to the C++ code and have it return a matrix to MATLAB. I have heard this is possible with something called a MEX file, which I am still looking into. However, I am not sure what is supported in these MEX files. Is all of C++ (e.g. STL and Boost) supported? How difficult is it?
EDIT: I don't need any shared libraries, just header-only stuff like shared_ptr.
Have a look at the MEX-files Guide, especially Sections 25–27, which cover C++.
The basic STL/Boost data structures should work, but threading with Boost could be a problem.
cout will not work as expected in C++ MEX files; mexPrintf has to be used instead.
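A minimal gateway (a rough, untested sketch) that prints from C++ looks like this:

```cpp
// hello_mex.cpp -- minimal MEX gateway; note mexPrintf instead of std::cout
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void)nlhs; (void)plhs; (void)nrhs; (void)prhs;   // unused here
    mexPrintf("hello from a C++ MEX file\n");         // appears in the Command Window
}
```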
It's certainly possible to write C++ MEX files which use STL and boost. In general, you should be able to do anything you please inside a C++ MEX file. The main practical restriction is that MATLAB already ships with a bunch of libraries, so if you're using one of the boost pieces that need a shared library (some are header-only), you'll need to match the version you compile against with the one shipping with MATLAB.
For instance, MATLAB R2009b ships with boost 1.36 (you can tell by looking at the names of the libraries in <matlabroot>/bin/<arch>).
The C++ files are actually compiled by an external compiler. Use mex -setup to select which one (here is a list of supported compilers). Therefore, you shouldn't have too many weird things happen, nor should you be too restricted by what you can do.
I did some MEX stuff last year, and my memory is a bit rusty, but you do need to construct MATLAB arrays using MEX functions. I found the MATLAB documentation adequate, and the whole experience not too painful.
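As a rough sketch of what that looks like (file and variable names are mine, and the dimensions are arbitrary), a MEX-file that reads one scalar parameter and returns a matrix built from an STL vector could be:

```cpp
// make_matrix.cpp -- build with: mex make_matrix.cpp
#include "mex.h"
#include <vector>
#include <algorithm>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void)nlhs;
    if (nrhs < 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected one double scalar input.");

    const double scale = mxGetScalar(prhs[0]);

    // Do the real work with whatever C++ you like (templates, STL, ...).
    std::vector<double> result = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
    for (double &x : result)
        x *= scale;

    // MATLAB output arrays are created through the mx* API; copy the data in.
    plhs[0] = mxCreateDoubleMatrix(2, 3, mxREAL);             // 2x3, column-major
    std::copy(result.begin(), result.end(), mxGetPr(plhs[0]));
}
```

From MATLAB this would be called as M = make_matrix(2) and would return a 2-by-3 double matrix.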
STL is definitely supported, and Boost probably is too. The point is, as long as you have STL and Boost available on your machine, you should be good to go.
I wrote a TCP/IP socket connection with server and client in C++, which works quite nicely in Visual Studio. Now I want to use the C++ client in MATLAB/Simulink through MEX files, and later in an S-function.
I found two descriptions of MEX files:
C++ MEX File Applications (C++ only)
C/C++ MEX Files (C and C++)
Now I am confused about which one to use. I wrote some simple programs with the second, but I always ran into problems with data types. I think it is because the given examples and functions are only for C and not for C++.
I appreciate any help! Thank you very much!
The differences:
The C interface described in the second link is much, much older (I used this interface way back in 1998). You can create a MEX-file with this interface and have it run on a large set of different versions of MATLAB. You can use it from C as well as C++ code.
The C++-only interface described in the first link is new in MATLAB R2018a (the C++ classes to manipulate MATLAB arrays were introduced in R2017b, but the ability to write a MEX-file was new in R2018a). MEX-files you write with this interface will not run on prior versions of MATLAB.
Additionally, this interface (finally!) allows for creating shared-data copies, in-place operations, etc. (the stuff we have been asking for for many years, but they didn't want to put into the old C interface because they worried it would be too complex for the average MEX-file writer).
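Based on the documentation (I have not compiled this myself, and the names and sizes here are just an example), a skeleton with the new interface looks roughly like this:

```cpp
// Rough sketch of the R2018a C++ MEX interface: a MexFunction class
// instead of a mexFunction() entry point.
#include "mex.hpp"
#include "mexAdapter.hpp"

class MexFunction : public matlab::mex::Function {
public:
    void operator()(matlab::mex::ArgumentList outputs,
                    matlab::mex::ArgumentList inputs) {
        matlab::data::ArrayFactory factory;
        const double offset = inputs[0][0];   // first element of the first input
        // Return a 1x3 double array built directly, no mxArray juggling.
        outputs[0] = factory.createArray<double>({1, 3},
                         {offset + 1.0, offset + 2.0, offset + 3.0});
    }
};
```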
Another change to be aware of:
In R2018a, MATLAB also changed the way that complex arrays are stored in memory. In older versions of MATLAB, the real and imaginary components are stored in separate memory blocks. In R2018a and on, they are stored in the same memory block, in the same fashion as you would likely use in your own code.
This affects MEX-files! If your MEX-file uses complex data, it needs to read and write it the way MATLAB stores it. If you run a MEX-file compiled for an older version of MATLAB, or compile a MEX-file using the current default build options in R2018a, a complex array will be copied into the old storage model before being passed to the MEX-file. A new option to the mex command, -R2018a, creates MEX-files that receive the data in the new storage model without that copy. But those MEX-files will not be compatible with previous versions of MATLAB.
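As a rough illustration (not tested against a specific release), the documented MX_HAS_INTERLEAVED_COMPLEX macro lets one source file handle both storage models:

```c
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void)nlhs; (void)plhs;
    if (nrhs < 1 || !mxIsComplex(prhs[0]))
        mexErrMsgTxt("Expected a complex input.");

#if MX_HAS_INTERLEAVED_COMPLEX
    /* Built with `mex -R2018a`: real and imaginary parts are interleaved. */
    mxComplexDouble *z = mxGetComplexDoubles(prhs[0]);
    mexPrintf("first element: %g + %gi\n", z[0].real, z[0].imag);
#else
    /* Old storage model: real and imaginary parts in separate blocks. */
    double *re = mxGetPr(prhs[0]);
    double *im = mxGetPi(prhs[0]);
    mexPrintf("first element: %g + %gi\n", re[0], im[0]);
#endif
}
```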
How to choose?
If you need your MEX-files to run on versions of MATLAB prior to R2018a, use the old C interface; you don't have a choice.
If you want to program in C, use the old C interface.
If you need to use complex data, and don't want to incur the cost of the copy, you need to target R2018a and newer, and R2017b and older, separately. You need to write separate code for these two "platforms". The older versions can only be targeted with the C interface. For the newer versions you can use either interface.
If you appreciate the advantages of modern C++ and would like to take advantage of them, and are targeting only the latest and greatest MATLAB version, then use the new C++ interface. I haven't tried it out yet, but from the documentation it looks to be very well designed.
I am new to C++, with some training in Fortran 95. I am trying to translate my knowledge into the new syntax but have run into a snag.
Many of my programs use modules with subroutines, subroutines within subroutines, and functions from the NAG library (nag.com), which are readily available and searchable.
I am currently looking for a C++ version of
http://www.nag.com/numeric/FL/manual/pdf/F08/f08naf.pdf
From what I have read so far, such libraries exist for C++, and I have used some simple ones (like vector, cmath, and math.h), but only ones that are already included in the Xcode package on my Mac.
I haven't seen anybody mention one of these that may be included with my Xcode, and I don't know how to incorporate outside libraries that I find. I am particularly interested in using:
http://www.alglib.net/download.php
Thus far I have been writing subroutines as void functions and simply including them in all of my code. But my code is becoming exceedingly cumbersome, and I would like to make something similar to a Fortran module to do Chebyshev calculations. I would much rather find a good library of eigenvalue routines, and maybe even Chebyshev routines, that I can use.
Essentially my question is: how do I use external libraries that I find, and does anybody have a recommendation for a good one? How can I write my own code that contains a callable set of functions and then call it from within another piece of code?
If I understand right, the part of the NAG library you're using in Fortran is basically LAPACK. There is a C interface to LAPACK, called LAPACKE (http://www.netlib.org/lapack/lapacke.html). You can use it in a C++ program.
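For example, here is a rough sketch (assuming a LAPACKE installation; link flags such as -llapacke -llapack -lblas vary by distribution) of computing the eigenvalues of a general real matrix, which is what F08NAF/DGEEV does:

```cpp
// eig_example.cpp -- sketch only, not a drop-in replacement for F08NAF
#include <lapacke.h>
#include <cstdio>

int main()
{
    // 2x2 matrix in row-major order; its eigenvalues are -1 and -2.
    double a[4]  = { 0.0,  1.0,
                    -2.0, -3.0};
    double wr[2], wi[2];   // real and imaginary parts of the eigenvalues
    double vl[4], vr[4];   // eigenvector storage, not referenced with 'N'

    lapack_int info = LAPACKE_dgeev(LAPACK_ROW_MAJOR, 'N', 'N', 2,
                                    a, 2, wr, wi, vl, 2, vr, 2);
    if (info != 0) return 1;

    for (int i = 0; i < 2; ++i)
        std::printf("lambda_%d = %g + %gi\n", i, wr[i], wi[i]);
    return 0;
}
```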
I didn't understand the other parts of your question.
I am reimplementing a MATLAB program in Visual C++. I am using Intel Integrated Performance Primitives (IPP) because the speed of the program is an important issue, and I have put a lot of effort into implementing some MATLAB functions. For example, for the min and max functions over a vector I use ippsMaxIndx_32f; but MATLAB also has a function called find.
Here is a description of the find function in MATLAB: it returns the indices of the nonzero elements of an array. I need a function that implements this behavior with high speed.
Are there any functions in Intel IPP that work like MATLAB's find?
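For reference, here is a plain C++ baseline of what I mean (the name is my own, nothing IPP-specific); I am looking for an equivalent that is vectorized:

```cpp
#include <vector>

// Indices of the nonzero elements of a float buffer, like MATLAB's find
// (except that MATLAB reports 1-based indices).
std::vector<int> find_nonzero(const float* src, int len)
{
    std::vector<int> idx;
    idx.reserve(len);
    for (int i = 0; i < len; ++i)
        if (src[i] != 0.0f)
            idx.push_back(i);
    return idx;
}
```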
I've never heard of a comprehensive port of MATLAB functionality to C++. That said, almost everything MATLAB does exists in a C/C++ library somewhere; a few off the top of my head:
LAPACK and BLAS; there are several good implementations, the most notable free one being ATLAS.
FFT is implemented in MATLAB via the FFTW library.
There are loads of fast open-source image-processing libraries out there, e.g. for interpolation and filtering.
There are really good OOP matrix libraries out there; Boost has a nice one.
Beyond that, figure out what you need; there is a good chance someone has implemented it in C/C++.
You can check these to see if they have the function you are looking for; I am not sure myself.
I've used ippsFind_* and they have worked just fine.
I would like to know how one goes about adding functionality from one open-source C++ library to another. To make things concrete, here is an example: I really like the "find" function in the Armadillo library, and now that I find myself using Eigen more, I rather miss it. How hard would it be to write an equivalent of "find" that is fully integrated into Eigen (i.e. using Eigen objects, etc.)? How does one go about doing this? Where can I find the source code of the "find" function?
thanks in advance,
You will have to write it on your own, taking into consideration differences between libraries. It may require some knowledge about the library that you are trying to extend, though.
Start by reading the Armadillo code to understand what it does in this function. Then work out how the analogous structures are implemented in Eigen and adapt the code. If you want to integrate it into Eigen, so that you link against only one library (only your custom Eigen, not standard Eigen plus your extensions), you will need to build Eigen with your files added to its Makefile/CMake configuration (or whatever Eigen uses).
You can find the Armadillo sources in a tar.gz archive here: http://arma.sourceforge.net/download.html
If you are asking where the find operator lives in the Armadillo sources, check include/armadillo_bits/op_find_bones.hpp and include/armadillo_bits/op_find_meat.hpp.
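To give an idea of the shape such an extension could take, here is a minimal free-function sketch for vector expressions (my own code, not Armadillo's implementation and not part of Eigen's API):

```cpp
#include <Eigen/Dense>
#include <vector>

// Collect the indices i where mask(i) is true, similar to arma::find().
template <typename Derived>
std::vector<Eigen::Index> find(const Eigen::DenseBase<Derived>& mask)
{
    std::vector<Eigen::Index> idx;
    for (Eigen::Index i = 0; i < mask.size(); ++i)
        if (mask(i))
            idx.push_back(i);
    return idx;
}

// Usage:
//   Eigen::VectorXd v(5);
//   v << 1, -2, 3, -4, 5;
//   std::vector<Eigen::Index> idx = find(v.array() > 0.0);   // {0, 2, 4}
```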
I am implementing sort and stream compaction algorithms in CUDA C. However, I have realized that it is not that simple to implement those algorithms myself with good performance. Given that I am working with matrices, I cannot use CUDPP, so, although I was avoiding it, I will have to work with the Thrust library (I know nothing of C++).
I have been programming in C, and I really just want to use C++ to work with Thrust, so basically I want to know whether I can keep most of my code in C and have little bits of C++ code (I am guessing I will have to use extern "C"), but I wanted to be sure this is feasible in CUDA.
Thanks in advance.
On the host-code side, Thrust is simple to integrate. Even though you might think the host-side code in any .cu file you compile is C, it is compiled with a C++ compiler anyway (much of CUDA internally relies on C++ features). So you are actually working in C++ already without realizing it.
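For instance (a sketch with made-up file and function names), the C code stays C and only the wrapper is compiled as C++ by nvcc:

```cpp
// gpu_sort.cu -- compiled by nvcc (i.e. as C++), exposes a C-callable entry point.
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>

extern "C" void gpu_sort_floats(float* data, int n)
{
    thrust::device_vector<float> d(data, data + n);   // copy host -> device
    thrust::sort(d.begin(), d.end());                 // parallel sort on the GPU
    thrust::copy(d.begin(), d.end(), data);           // copy the result back
}

/* In your C code (e.g. main.c, linked against the object file above):

   extern void gpu_sort_floats(float* data, int n);
   ...
   gpu_sort_floats(buffer, count);
*/
```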
Yes, it might complicate your build process, but otherwise it works fine. We use it all the time to wrap CUDA functions into C++ classes which (and this is the real kicker) are then wrapped with JNI for use in Java. If we can do it, you can do it! Have at it!