Hi all,
I have a test.m file (MATLAB source code) that implements an A() function, and a main.cpp file that will call A().
As you know, we can do this with the following steps:
Use MATLAB to compile test.m with mcc, which generates test.dll, test.ctf, and test.h.
Copy the test.dll, test.ctf, and test.h files into the VS2005 project; in main.cpp, call A() from test.dll.
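For reference, the call in main.cpp goes through the mcc-generated C interface, roughly like this (just a sketch: the real prototypes come from the generated test.h, and the mlfA argument list depends on A's inputs and outputs):
#include "test.h"   // generated by mcc along with test.dll and test.ctf

int main()
{
    // Start the MATLAB runtime and the generated library.
    if (!mclInitializeApplication(NULL, 0))
        return -1;
    if (!testInitialize())
        return -1;

    mxArray* result = NULL;
    mlfA(1, &result);      // mlfA is the C wrapper mcc generates for A();
                           // here A is assumed to take no inputs and return one output
    mxDestroyArray(result);

    testTerminate();
    mclTerminateApplication();
    return 0;
}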
But when I release the program, I also have to ship test.dll with it.
Is there another way: can I use VS2005 to compile both test.m and main.cpp so that only main.dll, main.ctf, and main.h are generated (so I only have to release main.dll, main.ctf, and main.h)?
In other words, I want to compile test.m into main.cpp's build.
I have tried this approach: in VS2005 -> Build Events -> Pre-Build Event -> Command Line: mcc C -w lib:test test.m
It generates the intermediate file test.ctf (only test.ctf, no test.dll), but I don't know how to compile test.ctf into main.cpp.
Could anyone help me?
Thanks.
You can do it the other way around and add your main.cpp to the MATLAB build process. I don't know the exact syntax, but you can add your main.cpp to the mcc/mbuild invocation and it will compile it into the DLL for you. When using deploytool in GUI mode, just drag C/C++ files into the resources area and they get compiled into the DLL. So you'll have a single DLL containing both the M-code and your own C++ code.
Another option, using the above strategy: first try the above and look at the output of deploytool: it will show you the commands used. First it invokes mcc, then mbuild, which in turn calls cl (the MS compiler). Use the exact command used to invoke mcc as a pre-build event, and then add the output files to cl in the same way mbuild does (you can also see in the output how it does that). This way you can use VS to build a single DLL anyway; just mimic what the MATLAB build process does.
Still, I'm not sure how this is beneficial over distributing the two separately. Also, don't forget that you have to distribute the entire MCR with it, or else your clients won't be able to run any code that uses the DLL.
1. What I need: compile .aidl files to C++ code.
2. What I found:
Generating C++ Binder Interfaces with aidl-cpp
AIDL compiler for C++ on Linux Desktop
3. What I did: I cloned the "aidl-cpp" project from the second link, but after looking through the many files in the project I got lost.
Some doubts:
Can I compile this project and use the resulting binary to compile my project? If yes, do I need the whole Android platform source code in my system path? I ask because I tried to compile a single file, main_java.cpp, with g++ on the command line, and it reported that some "android-base" libraries could not be found.
Or can I write my own .cpp files implementing the interfaces defined in my .aidl files, imitating the "aidl-cpp" project? I haven't tried this approach yet.
What's the right way to meet my need?
You can just use the aidl tool from the Android SDK build-tools, version 29.0.0 or later:
aidl --lang=ndk <input.aidl>
I am converting a function written in MATLAB into C with MATLAB Coder. In the converted files, the converted function always has const emlrtStack *sp as its first input argument. Now, when I try to test it in VC++ 2013, IntelliSense gives the error mentioned above.
I tried to locate this identifier manually in emlrt.h, but no such thing is defined there. I tried converting a simple multiply function with two input arguments (like c = mul(a,b)), but the converted function still has this extra argument in addition to a and b (which means this argument is not function specific).
If someone has a solution to this or has experienced a problem like this, please share.
Also, if someone knows a simple way to test these converted functions, that would be a much appreciated additional help.
It is likely that the code was generated for a MEX function rather than a standalone target. MEX functions are binaries written in C, C++, or Fortran that can be called like normal MATLAB functions. Generating code to produce a MEX function allows two things. First, you can test your generated code in MATLAB, because you can call the MEX function from MATLAB like any other function. Look for a file named mul_mex.mex* after you do code generation and try to call it: mul_mex(1,2). The other use for generating a MEX function is that it can often be faster than the MATLAB code from which it was generated. MEX functions are only used in the context of MATLAB.
The emlrtStack* parameter that you saw appears in MEX-generated code to aid run-time error reporting. It is not present in standalone code, which is designed to run outside of MATLAB.
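For illustration only (the exact prototypes depend on your argument types, so check the generated headers), the two targets give signatures roughly like this for c = mul(a,b):
/* MEX target: note the extra run-time stack argument used for error reporting */
real_T mul(const emlrtStack *sp, real_T a, real_T b);

/* Standalone LIB/DLL/EXE target: plain C types, no emlrtStack */
double mul(double a, double b);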
If you want to use the generated code in Visual Studio, or anywhere outside of MATLAB, you should choose one of the standalone targets: LIB, DLL, or EXE. This page shows how to change the output type. To summarize, from the command line you could say:
cfg = coder.config('lib'); %or 'dll' or 'exe'
codegen mul -config cfg -args {1,2}
If using the project interface, you click on the Build tab and choose static library or shared library in the "Output type" dropdown menu.
I would recommend reading this example that demonstrates how to use a generated DLL in Visual Studio.
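As a rough sketch of what consuming the standalone output from C++ looks like (assuming the generated files are mul.h, mul_initialize.h, and mul_terminate.h, and that the scalar-double case comes out as double mul(double, double); verify against the headers codegen actually produced):
#include <cstdio>
#include "mul.h"              // generated entry-point declaration
#include "mul_initialize.h"   // generated initialize declaration
#include "mul_terminate.h"    // generated terminate declaration

int main()
{
    mul_initialize();              // set up the generated code
    double c = mul(3.0, 4.0);      // call the converted function directly
    printf("mul(3,4) = %f\n", c);
    mul_terminate();               // clean up
    return 0;
}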
I built Qt from source (as DLLs) and am trying to build an application that uses those Qt DLLs. I don't have a lot of experience with C++, so I'm running into what I'm sure is a very basic issue.
My builds are failing on the includes with errors like this:
Fatal error: QNetworkProxy: No such file or directory
Here is the g++ command I am using (I also used -L to add the correct folder to the lib path, but that also didn't work):
g++ -l..\..\wkqt\bin\QtCore4.dll -l..\..\wkqt\bin\QtNetwork4.dll -l..\..\wkqt\bin\QtWebKit4.dll -I..\include -Ishared -Ipdf -Ilib -Iimage -o ..\bin\wkhtmltopdf.exe pdf\*.cc lib\*.cc image\*.cc shared\*.cc
I tried in Visual Studio as well (assuming it wouldn't build, but I wanted to see if I could at least include the Qt DLLs from there properly) and I get the same errors. Am I doing something wrong with the way I am compiling with g++? If I am linking with the DLLs properly, then what is the proper way to use Qt functions from my code?
To clarify, I am not asking how to properly use Qt. My question is: what is the proper way to use functions defined in any DLL from native C++ code? I apologize if this is a very basic question, but I'm unable to find a clear answer on Google and I don't have any experience with C++ and using third-party libraries from C++ code.
DLLs can be used by dynamically loading them at run time and calling their exported functions.
To call an exported function, first define its signature. Suppose the function's signature is:
BOOL MyFunction(int a, char* pszString);
Then define a matching function-pointer type:
typedef BOOL (WINAPI *PMYFUNCTION)(int a, char* pszString);
Then declare a pointer of that type:
PMYFUNCTION pfnMyFunction;
and get a valid pointer by calling GetProcAddress after the LoadLibrary call:
HMODULE hlib = LoadLibrary("c:\\Mylib.dll");
if (hlib)
{ pfnMyFunction = (PMYFUNCTION)GetProcAddress(hlib, "MyFunction"); }
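Putting it all together, a minimal self-contained sketch of the same pattern (the DLL path and the exported name "MyFunction" are placeholders for whatever your DLL actually exports):
#include <windows.h>
#include <cstdio>

// Function-pointer type matching the exported function's signature.
typedef BOOL (WINAPI *PMYFUNCTION)(int a, char* pszString);

int main()
{
    // Load the DLL at run time (placeholder path).
    HMODULE hLib = LoadLibraryA("C:\\Mylib.dll");
    if (!hLib)
    {
        printf("LoadLibrary failed: %lu\n", GetLastError());
        return 1;
    }

    // Resolve the exported symbol by name.
    PMYFUNCTION pfnMyFunction = (PMYFUNCTION)GetProcAddress(hLib, "MyFunction");
    if (pfnMyFunction)
    {
        char buffer[] = "hello";
        pfnMyFunction(42, buffer);   // call through the pointer
    }

    FreeLibrary(hLib);               // unload when finished
    return 0;
}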
Hope this helps...
I want to use LEAP Motion in D.
However, it has no C library; it only has a C++ library.
I tried SWIG 2.0.9 with the command below:
swig -c++ -d -d2 leap.i
This command outputs Leap.d, Leap_im.d, Leap_wrap.cxx, and Leap_wrap.h.
However, I don't know how to use the wrapper in D, and I can't find any documentation on how to use it.
If I use it as-is, I get a link error.
How do I use these wrappers in D2?
And can I use them without Leap.cpp (the source of Leap.dll)?
Update:
Thanks for the two answers, and sorry for the late reply; I've been busy.
To state the conclusion first: I could build the Leap sample code on Win64 by following the steps below.
1. Generate the wrappers with the command above.
2. Create an x64 DLL with VC2010 from Leap_wrap.cxx and Leap_wrap.h, importing Leap.lib (x64).
3. Compile Leap.d and Leap_im.d with dmd -c.
4. Build LeapTest.d with Leap.obj and Leap_im.obj.
All the commands are below:
swig -c++ -d -d2 leap.i
dmd -c Leap.d Leap_im.d -m64
dmd LeapTest.d Leap.obj Leap_im.obj -m64
Then execute LeapTest.exe (it requires the x64 Leap.dll and Leap_wrap.dll).
I could run the Leap program.
But the program crashes in the onFrame event callback.
I'll try again on x86 and investigate the cause.
A few helpful links (some information may be outdated):
http://klickverbot.at/blog/2010/11/announcing-d-support-in-swig/
http://www.swig.org/Doc2.0/D.html
http://www.swig.org/tutorial.html
I have never used SWIG personally, but my guess, based on general knowledge of SWIG, is:
Leap_wrap.cxx is a C++ source file that wraps calls to the target library's C++ functions in extern(C) functions.
Leap_wrap.h is a header file listing all the extern(C) wrappers.
Leap_im.d is a D module based on Leap_wrap.h, with the same extern(C) functions listed.
Leap.d is a D module that uses Leap_im.d as its implementation and reproduces an API similar to the original C++ one.
So in your D code you want to import the Leap module. Then compile Leap_wrap.cxx to an object file with your C++ compiler, and provide the D object files, Leap_wrap.o, and the target library at the linking stage. That should do the trick.
P.S. The Leap.cpp source should not be needed. Everything links directly from Leap_wrap.cxx to the target library binary.
Go to IRC, either FreeNode or OFTC, channel #D. In order to help you, we have to see what is in those files. My first guess is that you have to compile both D files and the C++ file into object files, and link them together. I suppose SWIG flattens the C++ API into a bunch of C functions, and that is probably what Leap_wrap.cxx does.
If the LEAP API is not complex (i.e. just a bunch of simple C++ classes), it may be possible to interface with it directly. Read more about it here: http://dlang.org/cpp_interface.html
I am trying to integrate CUDA and OpenCV in a project. The problem is that OpenCV won't compile when NVCC is used, while a normal C++ project compiles just fine. This seems odd to me, as I thought NVCC passed all host code through to the C/C++ compiler, in this case the Visual Studio compiler.
The errors I get are:
c:\opencv2.0\include\opencv\cxoperations.hpp(1137): error: no operator "=" matches these operands
operand types are: const cv::Range = cv::Range
c:\opencv2.0\include\opencv\cxoperations.hpp(2469): error: more than one instance of overloaded function "std::abs" matches the argument list:
function "abs(long double)"
function "abs(float)"
function "abs(double)"
function "abs(long)"
function "abs(int)"
argument types are: (ptrdiff_t)
So my question is: first, why the difference, considering that (what should be) the same compiler is being used; and secondly, how can I remedy this?
In general I would recommend keeping host code and CUDA code separate, using nvcc only for the kernels and their host "wrappers". This is particularly easy with Visual Studio: create your project as normal (e.g. a console application), then implement your application in .cpp files. When you want to run a CUDA function, create the kernel and a wrapper in one or more .cu files. The Cuda.rules file provided with the SDK will automatically enable VS to compile the .cu files and link the result with the rest of the .cpp files.
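A minimal sketch of that layout, host side only (the file and function names here are made up for illustration); the matching definition of runSaxpy would live in a .cu file that nvcc compiles and the linker then pulls in:
// kernel_wrapper.h -- plain C++ header, safe to include from any .cpp file
#pragma once
void runSaxpy(const float* x, float* y, int n, float a);  // defined in a .cu file

// main.cpp -- compiled by the normal C++ compiler, never passed through nvcc
#include <vector>
#include "kernel_wrapper.h"

int main()
{
    std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
    // All CUDA-specific code stays behind this call.
    runSaxpy(x.data(), y.data(), static_cast<int>(x.size()), 3.0f);
    return 0;
}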
NVCC passes C++ code through to the host compiler, but it has to parse and understand the code first. Unfortunately, NVCC has trouble with the STL. If at all possible, separate code that uses the STL into .cpp files and have those compiled by Visual Studio (without passing them through NVCC first).
Compile the .cu code as a library and then link it into the main program. I suggest using CMake, as it makes the process a breeze.
There's a project hosted at cuda-grayscale that shows how to integrate OpenCV and CUDA. If you download the sources, check the Makefile:
g++ $(CFLAGS) -c main.cpp -o Debug/main.o
nvcc $(CUDAFLAGS) -c kernel_gpu.cu -o Debug/kernel_gpu.o
g++ $(LDFLAGS) Debug/main.o Debug/kernel_gpu.o -o Debug/grayscale
It's a very simple project that demonstrates how to separate regular C++ code (OpenCV and so on) from the CUDA code and compile them.
So there's no easy way to use nvcc to compile your current C++ code? You have to write wrappers and compile them using nvcc, while you compile the rest of your code using g++ or what-have-you?