My situation: I want to use the SNOPT solver in Python through Pyomo's SolverFactory. I applied for the C/C++ libraries for the optimization solver SNOPT and received
libsnopt7.dylib
libsnopt7_cpp.dylib.
After managing to put together a dummy executable which SolverFactory can call, it gives me the following error message:
IOError: [Errno 2] No such file or directory:
'/var/folders/_d/vnct15hn3.9j8dhgqr6gjf3rw0000gn/T/tmpoSB0fh.pyomo.sol'
However, a file with that name does exist, only with a .nl suffix.
Does anyone know why this problem appears and how to solve it?
Thanks a lot.
Pyomo does not have a specialized or library-mode binding to SNOPT. To use SNOPT from Pyomo, you will need a compiled executable called "snopt" that has been built against the AMPL Solver Library (ASL). The ASL provides the interface that reads the .nl input file Pyomo generates and produces the .sol solution file Pyomo expects; the missing .sol file in the error above means no ASL-capable solver executable actually ran.
You can get the source for the AMPL Solver Library interface, along with the wrapper for SNOPT through Netlib: http://www.netlib.org/ampl/solvers/.
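In outline, the build looks something like the following. This is a sketch only: the exact directory names and makefile targets vary, so check the README that accompanies the Netlib sources.

```shell
# Hypothetical outline -- adjust paths and targets per the Netlib README.
# 1. Build the ASL itself in the downloaded solvers directory
#    (produces the amplsolver.a library):
cd solvers
./configurehere && make
# 2. Build the "snopt" driver in the snopt subdirectory, pointing its
#    makefile at your libsnopt7 / libsnopt7_cpp libraries:
cd snopt && make
# 3. Put the resulting "snopt" executable on your PATH so Pyomo's
#    SolverFactory('snopt') can find it.
```

Once that executable is on your PATH, Pyomo will write the .nl file, invoke snopt on it, and read back the .sol file that is currently missing.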
Related
I'm trying to use TensorFlow as an external library in my C++ application (mainly following this tutorial). What I've done so far:
I cloned the tensorflow repository (let's say the repo root dir is $TENSORFLOW).
Ran ./configure (with all settings at their defaults, so no CUDA, no OpenCL, etc.).
Built the shared library with bazel build -c opt //tensorflow:libtensorflow_cc.so (the build completed successfully).
Now I'm trying to #include "tensorflow/core/public/session.h". But after including it (and adding $TENSORFLOW and $TENSORFLOW/bazel-genfiles to the include path), I'm receiving this error:
$TENSORFLOW/tensorflow/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42:
fatal error: unsupported/Eigen/CXX11/Tensor: No such file or directory
There is a GitHub issue for a similar problem, but it's marked as closed without any solution provided. I tried both the master branch and the v1.4.0 release.
Do you happen to know what could cause this kind of problem and how to deal with it?
I (and many others) agonized over the same problem. It can probably be solved using bazel, but I don't know that tool well enough, so I solve it using make. The source of confusion is that a file named Tensor includes another file that is also named Tensor, which has led some people to wrongly conclude that Tensor is including itself.
If you built and installed the Python .whl file, there will be a tensorflow directory in dist-packages and an include directory below that, e.g. on my system:
/usr/local/lib/python2.7/dist-packages/tensorflow/include
From the include directory
find . -type f -name 'Tensor' -print
./third_party/eigen3/unsupported/Eigen/CXX11/Tensor
./external/eigen_archive/unsupported/Eigen/CXX11/Tensor
The first one has
#include "unsupported/Eigen/CXX11/Tensor"
and the file that should satisfy this is the second one.
So, to compile a session.cc that includes session.h, the following will work:
INC_TENS1=/usr/local/lib/python2.7/dist-packages/tensorflow/include/
INC_TENS2=${INC_TENS1}external/eigen_archive/
gcc -c -std=c++11 -I $INC_TENS1 -I $INC_TENS2 session.cc
I've seen claims that you must build apps from within the tensorflow tree and must use bazel. However, I believe all the header files you need are in dist-packages/tensorflow/include, and at least for starters you can construct makefile or CMake projects.
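If it helps, the gcc line above translates into a minimal makefile like this (a sketch only; the link step against libtensorflow_cc.so is left out, since its location depends on your bazel build):

```make
# Compile a TensorFlow C++ source against the headers installed with the
# Python package (paths from the answer above; adjust for your system).
TF_INC := /usr/local/lib/python2.7/dist-packages/tensorflow/include

session.o: session.cc
	$(CXX) -std=c++11 -I$(TF_INC) -I$(TF_INC)/external/eigen_archive -c $< -o $@
```

Note that the recipe line must start with a tab, as in any makefile.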
Slightly off-topic, but I had the same error with a C++ project using OpenCV 4.5.5 compiled with Visual Studio (no problem with OpenCV 4.3.0, and no problem with MinGW).
To make it work, I had to add to my root CMakeLists.txt:
add_definitions(-DOPENCV_DISABLE_EIGEN_TENSOR_SUPPORT)
Hopefully that helps someone...
The problem was actually the relative path of the header included by the Tensor file.
The installed path for Tensor is /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor,
but the Tensor file itself includes "unsupported/Eigen/CXX11/Tensor".
So /usr/include/eigen3/ must be on the project's include path for the include to resolve correctly.
Arjuncomar states in the opencv-raw README.md at https://github.com/arjuncomar/opencv-raw/blob/master/README.md:
"Compilation / Installation
Compiling this package might be a little tricky at the moment since I've only had the chance to test it on my machine so far. First, you need to generate C wrappers for the version of OpenCV on your machine -- this repo holds the wrappers for OpenCV 3.0 (HEAD) only. You can generate these wrappers (and the corresponding Haskell bindings) via:
./setup.sh <path to opencv headers>
e.g.
./setup.sh /usr/local/include"
I ran './setup.sh /usr/local/include' in the root directory of opencv-raw. By the way, I have OpenCV 2.4.7 installed on Ubuntu Saucy 64-bit. I get this error:
Traceback (most recent call last):
File "cbits/genhsc.py", line 161, in <module>
cgen.gen(header_dir, headers, dstdir)
File "/home/w/Documents/opencv-raw-master/cbits/genc.py", line 367, in gen
self.readHeaders(header_dir, srcfiles)
File "/home/w/Documents/opencv-raw-master/cbits/genc.py", line 350, in readHeaders
decls = parser.parse(header_dir + hdr)
File "/home/w/Documents/opencv-raw-master/cbits/hdr_parser.py", line 732, in parse
f = open(hname, "rt")
IOError: [Errno 2] No such file or directory: '/usr/local/include/opencv2/core.hpp'
Arjuncomar states "this repo holds the wrappers for OpenCV 3.0 (HEAD) only," so I tried to find an OpenCV 3.0 download, but had no luck. I've also never seen a core.hpp file in /usr/local/include, so I don't really understand the error. I'm trying to incorporate the auto-generated C wrappers for OpenCV's C++ interface that arjuncomar wrote for his Haskell bindings into my own OpenCV wrapper for a different language (minus the Haskell part, of course), and this felt like a good first step. But if I can just write a makefile for code like the following
the .cpp file:
void cv_imshow(String* winname, Mat* mat) {
    cv::imshow(*winname, *mat);
}
the .hpp file:
void cv_imshow(String* winname, Mat* mat);
and expect it to be a correct C wrapper for the C++ OpenCV code, please let me know. If possible, a link on how to write such a makefile would also help greatly. I'm used to C but new to C++ and C++ makefiles, and I would rather get this right on the first try so I can produce output more quickly without worrying about errors.
Any help is appreciated. A good day =) to you all.
I'm the library author. The library has been moved a couple of times following requests from the OpenCV folks. It currently sits in my fork of the opencv_contrib repo. Follow the instructions in the README to build and install the wrappers.
The procedure amounts to setting up an OpenCV build directory and having CMake populate it, telling it where the OpenCV source tree is located and that it needs to load the extra modules from opencv_contrib:
cd <cmake build directory>
cmake -DOPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules <opencv_source_directory>
Compiling and installing the library will install the C wrapper headers to "/include/opencv2/c/" and the compiled binary to "/lib/libopencv_c.so". If cabal and ghc are present on the system, it will also compile and install the Haskell bindings. For me, this is as simple as:
make -j6 && sudo make install
Building in this manner should avoid the issue listed in the OP, because the headers are pulled by CMake from the source tree and passed directly to the header parser and wrapper generator. Please send bug reports to either the opencv-raw repo or to opencv_contrib; I'm watching both repos and am always happy to take pull requests.
Edward -- I understand you're trying to get in touch with me. You can reach me at nrujac at gmail dot com, or directly on GitHub by opening an issue on any of my repos.
I found out you can just build Arjun Comar's fork here https://github.com/arjuncomar/opencv and the bindings will be auto-generated in the opencv_generated .cpp and .hpp files.
I was wondering if there is an equivalent in the GLPK API of the command-line tool "glpsol",
because I have a model written in a .mod file and data in a .dat file.
On the command line I can solve it by calling:
glpsol --model flow-glpk.mod --data your_data_set.dat
I would like to solve the same problem from a C/C++ program without resorting to an "execv()" call.
Have a look at mplsamp2.c in the examples directory of the source distribution; I believe it does what you want. You just have to change the hardcoded names appropriately for your application.
GLPK comes with a nice manual; section 3.2, "Routines for processing MathProg models", details how to work with MathProg models using the C API.
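In outline, the C-API equivalent of that glpsol invocation looks like the sketch below (file names taken from the question; error handling abbreviated; link with -lglpk):

```c
/* Sketch of the glpsol-equivalent flow using the GLPK MathProg translator
 * routines from section 3.2 of the manual. */
#include <stdio.h>
#include <glpk.h>

int main(void)
{
    glp_prob *lp = glp_create_prob();
    glp_tran *tran = glp_mpl_alloc_wksp();

    /* Read the model (skipping any inline data section), then the data. */
    if (glp_mpl_read_model(tran, "flow-glpk.mod", 1) != 0 ||
        glp_mpl_read_data(tran, "your_data_set.dat") != 0 ||
        glp_mpl_generate(tran, NULL) != 0) {
        fprintf(stderr, "MathProg translation failed\n");
        return 1;
    }

    glp_mpl_build_prob(tran, lp);     /* translate into a glp_prob */
    glp_simplex(lp, NULL);            /* solve the LP */
    glp_mpl_postsolve(tran, lp, GLP_SOL); /* run the model's display/printf */

    glp_mpl_free_wksp(tran);
    glp_delete_prob(lp);
    return 0;
}
```

For a MIP model you would call glp_intopt after glp_simplex and postsolve with GLP_MIP instead of GLP_SOL.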
I am using INET, and I made some modifications for my work. I created a new address class (it compiles without errors). When I try to use this class in my .cc files,
I get this error:
<!> Warning: opp_run: Cannot check library ../../src/inet: ../../src//libinet.so: undefined symbol: _ZNK12MYAddress3strEv
<!> Error during startup: Cannot load library '../../src//libinet.so': ../../src//libinet.so: undefined symbol: _ZNK12MYGAddress3strEv.
OMNeT++ Discrete Event Simulation (C) 1992-2011 Andras Varga, OpenSim Ltd.
Version: 4.2.2, build: 120327-7947143, edition: Academic Public License -- NOT FOR COMMERCIAL USE
See the license for distribution terms and warranty disclaimer
End.
Simulation terminated with exit code: 1 (I don't know what that means).
I ran the debugger and got:
.gdbinit: no such file or directory
But I don't know what that means either.
I would appreciate any help; I have no idea how to solve this.
You can safely ignore the missing .gdbinit warning; it has nothing to do with your problem.
The root cause is most likely that the new address class is not compiled into the shared library. Be sure to regenerate your makefile, and check that the file uses .cc as its extension and is located under the src directory.
I am very new to R. I would like to build an R package that calls a C++ function using .Call().
I have a NAMESPACE file, with
useDynLib(mypkg)
where mypkg is also the name of the function in my C++ code.
It works if I use this line at the beginning of the mypkg.R file:
dyn.load("src/mypkg.so")
but I want to use library.dynam instead, so in the zzz.R file, I put
.onLoad<-function(libname, pkgname)
{
library.dynam("mypkg", pkgname, libname)
}
It gives this error when checking the package:
...
Error in .Call("mypkg", PACKAGE = "mypkg") :
C symbol name "mypkg" not in DLL for package "mypkg".
Error : unable to load R code in package 'mypkg'
...
It looks like the *.so file is generated in the wrong place? Why is there no libs folder generated?
I would like the package to be OS-independent; is there a way to do that with dyn.load?
And this may be a very silly question: where do pkgname and libname get their values from?
Thank you very much for your help.
You could look at one of the many existing packages (with compiled source code) on CRAN.
Smaller and simpler is easier to grok, so you could e.g. look at digest, which uses a NAMESPACE to load the one shared library built from a handful of C source files, and uses .Call() to access the main entry point.
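One likely cause of the 'C symbol name "mypkg" not in DLL' error above is worth spelling out: since the source is C++, the compiler mangles the function's name unless it is declared extern "C", so .Call("mypkg") cannot find the symbol. A sketch of the C++ side (function body and names are hypothetical):

```cpp
// Sketch of a .Call()-compatible entry point in a C++ source file.
// extern "C" prevents C++ name mangling so the symbol "mypkg" is
// actually visible in the compiled shared library.
#include <R.h>
#include <Rinternals.h>

extern "C" SEXP mypkg(SEXP x)
{
    SEXP out = PROTECT(allocVector(REALSXP, 1));
    REAL(out)[0] = asReal(x) + 1.0;  // placeholder computation
    UNPROTECT(1);
    return out;
}
```

On the R side you would then call .Call("mypkg", 2, PACKAGE = "mypkg"). Also note that the libs directory is created when the package is installed (e.g. via R CMD INSTALL), not while the sources sit in src/.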
And then there is of course the manual...