I need to install the QCheck/SML unit test library for Standard ML.
I could git clone the code and build the .cm file, but I'm not sure where the generated file should be copied. The documentation (http://contrapunctus.net/league/haques/qcheck/qcheck_2.html) simply says:
2.1 SML/NJ
For Standard ML of New Jersey, the CM library specification ‘qcheck.cm’ should be all you need. The default target of make -f Makefile.nj will ask CM to build and stabilize this library. This creates a file ‘.cm/x86-unix/qcheck.cm’ (alter the arch/os tag as needed) which may be copied into the standard CM library path and added to the ‘pathconfig’.
I installed SML/NJ on macOS with brew install smlnj, so I have SMLNJ_HOME at /usr/local/Cellar/smlnj/110.78/SMLNJ_HOME.
What is the CM library path in this setup? And in general, how do I install a library into SML/NJ?
Edit
From Matt's answer, this is how I made it work.
Setup
Copy the whole qcheck directory into /usr/local/Cellar/smlnj/110.78/SMLNJ_HOME/lib.
Create a ~/.smlnj-pathconfig file.
Add the line qcheck.cm /usr/local/Cellar/smlnj/110.78/SMLNJ_HOME/lib/qcheck to the file.
Usage (in REPL)
CM.make "$/qcheck.cm";
open QCheck;
Things to consider
I couldn't use the stabilized library (qcheck/.cm/x86-unix/qcheck.cm), so I had to copy the whole directory.
For a user library, I think the install location can be anywhere, since ~/.smlnj-pathconfig can point to whatever directory you choose.
For importing a structure from a file in the same directory, use "FILENAME"; is needed instead of CM.make.
The CM library path is located in SMLNJ_HOME/lib; you can place the .cm file there. The instructions say to modify the pathconfig file, but I would suggest creating a .smlnj-pathconfig file in your home directory instead. Then paste the following line into that file:
qcheck.cm <path to directory containing qcheck.cm file>
You can then reference this in one of your .cm files using the anchor name: $/qcheck.cm. I've not used stabilized libraries before, and the generated .cm file is giving me a bunch of errors. If you instead use the qcheck.cm file from the root directory of the qcheck repo, it seems to work for me. Perhaps someone else can comment on why I am getting these errors.
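For example, a minimal CM description file that pulls in the library via that anchor could look like this (just a sketch; the library name and source file are placeholders):
Library
  structure MyTests
is
  $/basis.cm
  $/qcheck.cm
  my_tests.sml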
In an iOS C++/Qt application, I need to ship a few files and keep them in their directory structure.
For the Android version, we bundle a zip which we unzip on the target before creating the QApplication.
On iOS, it seems that CMake is not capable of bundling files in a tree:
https://cmake.org/cmake/help/latest/prop_tgt/RESOURCE.html#prop_tgt:RESOURCE
https://cmake.org/cmake/help/latest/prop_sf/MACOSX_PACKAGE_LOCATION.html
I am not sure whether this is a limitation of CMake or a general limitation of iOS.
From the docs about iOS bundles:
It uses a relatively flat structure with few extraneous directories in an effort to save disk space and simplify access to the files.
What would be the preferred approach?
Is there a solution to ship the files from CMake directly?
If not, how can I achieve this so that they are available before the QApplication is created?
The Xcode command
Thanks to @Cy-4AH, I added the folder in Xcode and could see the command it uses to do this:
CpResource _PATH_TO_DIRECTORY_ _APP_BUNDLE_DIRECTORY_/_RESOURCE_DIR_NAME_
cd /Users/denis/opt/qfield/ios/QField
export PATH="....."
builtin-copy -exclude .DS_Store -exclude CVS -exclude .svn -exclude .git -exclude .hg -strip-debug-symbols -strip-tool /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/strip -resolve-src-symlinks _PATH_TO_DIRECTORY_ _APP_BUNDLE_DIRECTORY_
But how can I create this from CMake? builtin-copy is an Xcode command.
Simple system copy command
From an old (2008) discussion, we could use simple cp commands.
This works up to the signing step, but then I get the error "unsealed contents present in the bundle root".
From this answer, it seems this is because I cannot simply add folders inside the resource directory. From the docs on the anatomy of framework bundles: "Nonlocalized resources reside at the top level of the Resources directory."
(Disclaimer: I'm not a CMake user, and there may be a more CMake-ey way to do this)
If you can set up a post-build action, the following terminal script can efficiently sync files into your bundle from another location. I use it in my game engine because it only copies updated or new files on subsequent builds, and it preserves directory structure:
mkdir -p PATHTO/ORIGINFOLDERNAME
mkdir -p PATHTOBUILDFOLDER/PROJECTNAME.app/Contents/Resources/DESTINATIONFOLDERNAME
rsync -avu --delete --exclude=".*" PATHTO/ORIGINFOLDERNAME/ PATHTOBUILDFOLDER/PROJECTNAME.app/Contents/Resources/DESTINATIONFOLDERNAME
The mkdir commands only ensure that the folders exist, in case they were deleted.
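If you drive the build with CMake, one possible (untested) way to hook this in is a post-build custom command; the target name MyApp and the data/ source folder below are placeholders:
# Sketch: sync a resource folder into the bundle after each build.
# TARGET_BUNDLE_DIR needs CMake >= 3.9; adjust the destination name as needed.
add_custom_command(TARGET MyApp POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E make_directory
            "$<TARGET_BUNDLE_DIR:MyApp>/DESTINATIONFOLDERNAME"
    COMMAND rsync -avu --delete --exclude=".*"
            "${CMAKE_SOURCE_DIR}/data/"
            "$<TARGET_BUNDLE_DIR:MyApp>/DESTINATIONFOLDERNAME"
    COMMENT "Syncing resources into the app bundle")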
So apparently the CMake method also works for directories.
target_sources(${QT_IOS_TARGET} PRIVATE ${_resource})
set_source_files_properties(${_resource} PROPERTIES MACOSX_PACKAGE_LOCATION Resources)
The directory will just be added at the root of the bundle, though, and not within Resources.
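To keep a nested layout under Resources, one hedged variation is to list the files individually and compute a per-file destination; ${QT_IOS_TARGET} is taken from the snippet above, while the data/ folder is a placeholder:
# Sketch: preserve each file's relative directory below Resources.
file(GLOB_RECURSE _resources "${CMAKE_SOURCE_DIR}/data/*")
foreach(_resource ${_resources})
    file(RELATIVE_PATH _rel "${CMAKE_SOURCE_DIR}/data" "${_resource}")
    get_filename_component(_rel_dir "${_rel}" DIRECTORY)
    target_sources(${QT_IOS_TARGET} PRIVATE "${_resource}")
    set_source_files_properties("${_resource}" PROPERTIES
        MACOSX_PACKAGE_LOCATION "Resources/${_rel_dir}")
endforeach()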
If the embedded file is not too big, you might consider:
in your source tree, generating a C++ file embedding that file as a constant array. For example, if your file contains just hello, world with a new line, you could have something like
/// file contents.cc
const char file_contents[] = "hello, world\n";
and at the beginning of your program (perhaps in your main function, before your QApplication) call a C++ function which writes such a file (perhaps in /tmp/).
in your build automation (e.g. your Makefile or your qmake rules), have a step which generates the C++ contents.cc file from the genuine source file.
This is written from a POSIX/Linux point of view; adapt my answer to iOS.
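A minimal sketch of that idea (the /tmp/ path and file name are illustrative assumptions; in the real setup file_contents would live in the generated contents.cc):
// main.cpp: write the embedded data to disk before creating the QApplication.
#include <cstdio>
#include <QApplication>

const char file_contents[] = "hello, world\n";   // normally generated by the build step

static void writeEmbeddedFile(const char *path)
{
    if (std::FILE *f = std::fopen(path, "wb")) {
        std::fwrite(file_contents, 1, sizeof(file_contents) - 1, f);
        std::fclose(f);
    }
}

int main(int argc, char *argv[])
{
    writeEmbeddedFile("/tmp/contents.txt");   // available before the application starts
    QApplication app(argc, argv);
    return app.exec();
}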
I'm using Conan as a dependency manager for C++, and I want to create a package which requires a compiled file from another, already created, Conan package.
I'm currently trying to create a package for the OpenStreetMap-Library OSM-binary (https://github.com/scrosby/OSM-binary.git).
The Makefile for this project (found at ./OSM-binary/src/Makefile) requires a file called protoc from the protobuf project (https://github.com/google/protobuf). This protoc file can be found in ./protobuf/src after compiling the protobuf project.
Without this file, compiling the OSM sources fails with the error: make: ../protoc: Command not found
The Problem
Conan's documentation suggests copying the files I need into folders in my package, e.g. header files to ./include, libs to ./lib, etc.
Following this, after building the protobuf project via make, I copy the mentioned file via
def package(self):
    self.copy("*.so", dst="lib", keep_path=False)
    self.copy("protoc", dst="scripts", src="./protobuf/src")
to a folder called "scripts".
But at this point the black magic starts.
My first question is: how can I access any of these packaged files (e.g. the *.so files, or any other file such as protoc) from another package?
Even after reading Conan's documentation, it's not clear to me how Conan stores its packaged files and how to access them, or any other files packaged in the previous step.
Back in the OSM project, my approach would be to set the correct path manually in the Makefile via tools.replace_in_file.
First I declared the protobuf package as a requirement,
requires = "protobuf/2.5.0@test/testing"
and replaced the corresponding line (line 7 in version 1.3.3) of the OSM Makefile with the correct path to the protoc file.
tools.replace_in_file("OSM-binary/src/Makefile",
                      "PROTOC ?= protoc",
                      "PROTOC ?= <path-to-file>/protoc")
Now this leads to my actual question: how do I get the path to the protoc file, which lives in the protobuf package in a folder called scripts? Or is there another way to do it?
Thanks,
Chris
There are different ways to access files from your dependencies:
If you want to directly run some file from your dependencies, you can use self.run(..., run_environment=True), which will automatically set PATH, LD_LIBRARY_PATH, etc. so the binaries are found in the place where the package is installed. Find more information here.
You can directly import the files you want from your dependencies, copying them (this happens before the build() method) into the build folder so they can be used directly there. The path you can use in your script is the current one, or self.build_folder. The imported files are automatically removed after the build, so they are not accidentally repackaged. Check the imports docs.
You can obtain information about your dependencies from the self.deps_cpp_info attribute. Check the reference here. That means you can get the paths to your protobuf dependency with something like
def build(self):
    # Get the directory where the protobuf package is installed
    protoc_root = self.deps_cpp_info["protobuf"].rootpath
    # Note this is a list
    protoc_bin_paths = self.deps_cpp_info["protobuf"].bin_paths
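Combining that with the replace_in_file approach from the question, a hedged sketch of the consumer recipe might look like this (the scripts subfolder and the test/testing reference are taken from the question; everything else is illustrative, not a verified recipe):
import os
from conans import ConanFile, tools

class OsmBinaryConan(ConanFile):
    name = "osm-binary"
    version = "1.3.3"
    requires = "protobuf/2.5.0@test/testing"

    def build(self):
        # Root folder of the installed protobuf package
        protobuf_root = self.deps_cpp_info["protobuf"].rootpath
        # "scripts" matches the dst= used in the protobuf recipe's package()
        protoc = os.path.join(protobuf_root, "scripts", "protoc")
        tools.replace_in_file("OSM-binary/src/Makefile",
                              "PROTOC ?= protoc",
                              "PROTOC ?= %s" % protoc)
        self.run("make -C OSM-binary/src")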
I'm trying to use TensorFlow as an external library in my C++ application (mainly following this tutorial). What I have done so far:
I have cloned the tensorflow repository (let's say the repo root dir is $TENSORFLOW).
Ran ./configure (with all settings at their defaults, so no CUDA, no OpenCL, etc.).
Built the shared library with bazel build -c opt //tensorflow:libtensorflow_cc.so (the build completed successfully).
Now I'm trying to #include "tensorflow/core/public/session.h". But after including it (and adding $TENSORFLOW and $TENSORFLOW/bazel-genfiles to the include path), I get this error:
$TENSORFLOW/tensorflow/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42:
fatal error: unsupported/Eigen/CXX11/Tensor: No such file or directory
There is a GitHub issue for a similar problem, but it's marked as closed without any solution provided. I tried the master branch as well as the v1.4.0 release.
Do you happen to know what could cause this kind of problem and how to deal with it?
I (and many others) agonized over the same problem. It can probably be solved using bazel, but I don't know that tool well enough, so for now I solve it using make. The source of confusion is that a file named Tensor includes another file named Tensor, which has caused some people to wrongly conclude that Tensor is including itself.
If you built and installed the Python .whl file, there will be a tensorflow directory in dist-packages and an include directory below that, e.g. on my system:
/usr/local/lib/python2.7/dist-packages/tensorflow/include
From the include directory
find . -type f -name 'Tensor' -print
./third_party/eigen3/unsupported/Eigen/CXX11/Tensor
./external/eigen_archive/unsupported/Eigen/CXX11/Tensor
The first one has
#include "unsupported/Eigen/CXX11/Tensor"
and the file that should satisfy this is the second one.
So to compile a session.cc that includes session.h, the following will work:
INC_TENS1=/usr/local/lib/python2.7/dist-packages/tensorflow/include/
INC_TENS2=${INC_TENS1}external/eigen_archive/
gcc -c -std=c++11 -I $INC_TENS1 -I $INC_TENS2 session.cc
I've seen claims that you must build apps from the tensorflow tree and you must use bazel. However, I believe all the header files you need are in dist-packages/tensorflow/include and at least for starters you can construct makefile or cmake projects.
Slightly off-topic, but I had the same error with a C++ project using opencv-4.5.5 and compiled with Visual Studio (no problem with opencv-4.3.0, and no problem with MinGW).
To make it work, I had to add to my root CMakeLists.txt:
add_definitions(-DOPENCV_DISABLE_EIGEN_TENSOR_SUPPORT)
If that can help someone...
The problem was actually in the relative path of the header included by the Tensor file.
The installed path for Tensor is /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor,
but the include written in the Tensor file is "unsupported/Eigen/CXX11/Tensor".
So the project's include path needs an entry up to /usr/include/eigen3/ for this to resolve correctly.
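For example, a hedged illustration with g++ (the source file name is a placeholder):
g++ -I/usr/include/eigen3 -c example.cpp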
I've got a problem compiling my basic code with MOOS-IvP.
I made main.cpp, simpleApp.cpp, and simpleApp.h from the documentation; where should I put them to build them with MOOS? The docs have a note about launching MOOSDB and uMS, which is fine, but there is no option for pointing to the path of my .cpp files. Is there a default path? Or should I first compile it with gcc?
I'll assume you know some basic information about MOOS or are taking the MIT 2.680 course and know some of the terminology talked about in the introduction lab.
The recommended way to build external MOOS apps is to have moos-ivp and moos-ivp-extend in directories next to each other. Run GenMOOSApp_AppCasting in the moos-ivp-extend/src directory and add your new project to the CMakeLists.txt file in the same directory. Then use the included ./build.sh script to build your executables, and add the directory it creates to your $PATH (see the sketch below).
Finally, you should be able to run your mission with your new MOOS app.
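A rough shell outline of those steps (the generator's exact arguments are not shown here; check its usage message):
cd moos-ivp-extend/src
./GenMOOSApp_AppCasting        # run with your app name; creates a new app directory
# add the generated directory to src/CMakeLists.txt, then:
cd ..
./build.sh
export PATH=$PATH:$(pwd)/bin   # so your new app is found when launching the mission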
I am developing an app in NetBeans; while I can run it, I cannot debug it or run the test files. When I try to do so, I get:
./build/Debug/GNU-Linux-x86/tests/TestFiles/f1: error while loading shared libraries: libboost_thread.so.1.49.0: cannot open shared object file: No such file or directory
I tried including the library or the specific file in the debugging or testing session, but I continue to get that error. Could there be an inconsistency with NetBeans?
Any ideas would be greatly appreciated!
I assume your OS is Linux. It follows from your post that you have access to a copy of the libboost_thread.so.1.49.0 file. Let DIR be the directory where this library exists.
If you do not have superuser access on this computer, use method A. If you do, use method A or method B.
Method A. Good for non-superuser or for superuser.
Let DIR be the directory in which the library libboost_thread.so.1.49.0 exists.
I assume you can start NetBeans from the shell command line, not from a GUI icon.
Quit NetBeans. Execute the following command in bash:
export LD_LIBRARY_PATH=DIR:$LD_LIBRARY_PATH
Then start NetBeans from the command line.
Eventually, you will want to put the export command into your ~/.bashrc file.
Method B. Good only for superuser.
If you have superuser access, use one of the following methods to place the missing library into /usr/lib or /lib:
(1) install boost from rpm or apt or whatever packaging your linux system has, or
(2) install boost from sources with --prefix=/usr, or
(3) copy the mentioned library to /usr/lib. If you have to use #3, be careful about symlinks. Copy using "cp -a" and copy all files whose names begin with libboost_thread.so, like:
cp -a DIR/libboost_thread.so* /usr/lib