I wanted to compile my project using the command:
ocamlfind ocamlopt -package ocamlnet -package batteries -package unix -linkpkg oauth.ml
but I'm getting the following error:
ocamlfind: Package `ocamlnet' not found
make: *** [oauth.cmi] Error 2
While researching this problem I read that there may be a conflict between packages installed via opam and packages installed before opam (in this case ocamlfind), so I tried to check that and got stuck, because ocamlfind is installed via opam. Does anybody know what I could try in order to solve this problem?
$ which ocamlfind
/home/user/.opam/4.00.1/bin/ocamlfind
$ opam list
Installed packages for 4.00.1:
[...]
ocamlfind 1.4.0 A library manager for OCaml
[...]
Thanks in advance.
Do a:
eval $(opam config env)
That should fix the problem.
# Edit 1:
If it still does not work, remove the directory
/home/user/.opam/4.00.1
and try it again.
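To check that the environment took effect, and to make it permanent, something like the following should do (assuming bash; adjust the startup file for your shell):
# Load the opam environment into the current shell
eval $(opam config env)
# ocamlfind should now resolve inside the opam switch
which ocamlfind    # expected: /home/user/.opam/4.00.1/bin/ocamlfind
# Optionally make it permanent for future shells
echo 'eval $(opam config env)' >> ~/.bashrc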
I'm posting this as an answer because of the comment length limit and because of the partial results I've obtained. Here is the result of 'grep -r 'ocamlnet' *' in the '~/.opam' directory: http://pastebin.com/8cJqMXDY. Looking at lines 1-90, we may conclude that there is actually no ocamlnet library at all (or I'm looking for it in the wrong place - but as I wrote in a comment, everything was installed using opam - I'd be glad to hear some opinions on this). These suspicions can be partially confirmed in two ways:
in fact, lines 1-90 cover all the binaries of the ocamlnet components (http://projects.camlcity.org/projects/dl/ocamlnet-3.7.3/doc/html-main/index.html)
lines 90 onward don't seem to be anything other than the files opam needs to manage this package. E.g.
~/.opam/repo/default/packages/ocamlnet$ tree -r .
.
├── ocamlnet.3.7.3
│ ├── url
│ ├── opam
│ ├── files
│ │ └── ocamlnet.install
│ └── descr
├── ocamlnet.3.6.5
│ ├── url
│ ├── opam
│ ├── files
│ │ ├── ocamlnet.install
│ │ ├── netpop.patch
│ │ ├── nethttpd_types.patch
│ │ └── cloexec.patch
│ └── descr
├── ocamlnet.3.6.3
│ ├── url
│ ├── opam
│ ├── files
│ │ └── ocamlnet.install
│ └── descr
├── ocamlnet.3.6.0
│ ├── url
│ ├── opam
│ ├── files
│ │ ├── ocamlnet-ocaml4.diff
│ │ └── ocamlnet.install
│ └── descr
├── ocamlnet.3.5.1
│ ├── url
│ ├── opam
│ ├── files
│ │ └── ocamlnet.install
│ └── descr
└── ocamlnet.3.2.1
├── url
├── opam
├── files
│ └── ocamlnet.install
└── descr
I don't have enough knowledge to go into this more deeply, but it looks to me like ocamlnet has become just an umbrella for a few other packages managed by opam. Especially since, after changing
-package ocamlnet
to the exact library I'm actually using,
-package netstring
everything compiled fine. I'm still open to any other solutions or explanations for the curious case of the ocamlnet package (and Michael's hints) B).
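For anyone hitting the same thing, a quick way to see which findlib packages are actually available (as opposed to the opam package names) is ocamlfind list; a sketch of the check and the compile command that follows from it:
# List the findlib packages ocamlfind knows about and filter for the ocamlnet components
ocamlfind list | grep -i net
# Compile against the component actually used (netstring here) instead of the umbrella name
ocamlfind ocamlopt -package netstring -package batteries -package unix -linkpkg oauth.ml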
I'm trying to use the PocketSphinx speech-to-text library in my project as a Git submodule. So I added the submodule to my dependencies folder and added the following line to my CMakeLists.txt:
add_subdirectory(dependencies/pocketsphinx)
But when I build the project, I get the following error:
[build] /home/aniket/code/restapi/dependencies/pocketsphinx/src/allphone_search.c:43:10: fatal error: pocketsphinx.h: No such file or directory
[build] 43 | #include <pocketsphinx.h>
[build] | ^~~~~~~~~~~~~~~~
[build] compilation terminated.
My guess is that CMake cannot find the header files, but when I build PocketSphinx alone it works fine; a sketch of the workaround I have in mind follows the directory structure below.
I'm also using the JsonCpp library, which compiles without any problem.
My CMake file is:
cmake_minimum_required(VERSION 3.2.0)
project(assistant)
add_executable(${PROJECT_NAME} src/main.cpp)
add_subdirectory(dependencies/jsonpp)
add_subdirectory(dependencies/pocketsphinx)
target_include_directories(${PROJECT_NAME} PUBLIC include PUBLIC
dependencies/jsonpp/include)
target_include_directories(${PROJECT_NAME} PRIVATE include
PRIVATE dependencies/jsonpp/include)
target_link_directories(${PROJECT_NAME} PRIVATE build/lib)
target_link_libraries(${PROJECT_NAME} PRIVATE jsoncpp curl)
Here's my directory structure:
.
├── build
├── dependencies
│ ├── jsonpp
│ │ ├── cmake
│ │ ├── devtools
│ │ ├── doc
│ │ ├── example
│ │ ├── include
│ │ ├── pkg-config
│ │ ├── src
│ │ └── test
│ └── pocketsphinx
│ ├── cython
│ ├── docs
│ ├── doxygen
│ ├── examples
│ ├── gst
│ ├── include
│ ├── model
│ ├── programs
│ ├── src
│ └── test
├── include
└── src
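If the submodule build can't be made to work, the rough fallback sketched below is to build and install PocketSphinx separately (which already works for me) and point my target at that copy; the install prefix and the library name pocketsphinx are assumptions on my part:
# Instead of add_subdirectory(dependencies/pocketsphinx), consume a separately
# built/installed copy (assumed to live in dependencies/pocketsphinx/install)
target_include_directories(${PROJECT_NAME} PRIVATE
    ${CMAKE_SOURCE_DIR}/dependencies/pocketsphinx/install/include)
target_link_directories(${PROJECT_NAME} PRIVATE
    ${CMAKE_SOURCE_DIR}/dependencies/pocketsphinx/install/lib)
target_link_libraries(${PROJECT_NAME} PRIVATE pocketsphinx)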
Running go test -v ./... from the root directory misses some of the tests in my project's subfolders.
My file structure is as follows:
.
├── cmd
│ └── Myapp
│ ├── config_test.go
│ └── main.go
├── config.yml
├── Myapp
├── go.mod
├── go.sum
├── images
│ └── Dockerfile.dev
├── kubernetes
│ └── deployment.yml
├── LICENSE.md
├── pkg
│ └── AFolder
│ ├── Bookmark.go
│ ├── Bookmark_test.go
│ ├── go.mod
│ ├── go.sum
│ ├── end.go
│ ├── Handler.go
│ ├── Handler_test.go
│ ├── UpdateHandler.go
│ └── UpdateHandler_test.go
├── README.md
├── renovate.json
└── skaffold.yaml
The project is located at ../go/src/Myapp and the GOPATH is ../go.
I can build fine from the root with go build ./... and it creates a binary.
But running go test -v ./... only runs the tests in config_test.go and misses the tests in the pkg subfolders.
A problem with the file structure would be my first suspicion. But I am not sure how to go about fixing it. Any advice would be appreciated.
The problem is caused by the go.mod file in the AFolder folder.
The ./... pattern matches all the packages within the current module. AFolder is not part of the current module, because it has its own go.mod file. In other words, go test excludes all subfolders that contain their own go.mod file.
go test all will test all the dependencies as well. However, this is time-consuming and might be unnecessary.
In my project, I just create a go.mod file in the root folder. For example (a sketch of the commands for this question's layout is at the end of this answer):
In the root folder, run go mod init github.com/cyrilzh/myproject
Create a subfolder "sub" and put code in it with the package name sub
In the project, reference the sub package like this:
import "github.com/cyrilzh/myproject/sub"
For more information, please refer to
golang wiki: how to define a module
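A minimal sketch of the commands for the layout in this question (module path taken from the example above; the nested module file has to go so that ./... can see those packages):
# From the project root: one module for the whole repository
go mod init github.com/cyrilzh/myproject
# Remove the nested module so pkg/AFolder becomes part of the root module
rm pkg/AFolder/go.mod pkg/AFolder/go.sum
# Now ./... matches every package, and all the tests run
go test -v ./...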
I have two folders in my main directory, cloud_functions and cloudbuild:
├── cloud_functions
│ ├── batch_predict
│ │ ├── config.py
│ ├── main.py
│ ├── requirements.txt
│ └── utils.py
├── cloudbuild
│ ├── batch_predict_cloud_function
│ │ ├── config.yaml
│ │ ├── create_triggers.sh
In the Cloud Build trigger I specified the glob patterns as:
cloud_functions/batch_predict/**
cloudbuild/batch_predict_cloud_function/**
This is accomplished with the following flag in the gcloud command that creates the trigger:
--included-files="cloud_functions/batch_predict/**, cloudbuild/batch_predict_cloud_function/**" \
I validated that the globs are registered in the UI, but changes in the cloudbuild folder don't trigger the build. Any ideas why this might be happening?
To specify multiple glob patterns in the gcloud command, you have to pass them to the --included-files option as comma-separated values with no space after the comma, quoting each pattern separately:
--included-files "cloud_functions/batch_predict/**","cloudbuild/batch_predict_cloud_function/**"
I'm in the process of putting together a small C++ project using CMake for the first time. My current project structure is:
├── bch
│ ├── CMakeLists.txt
│ ├── gf
│ │ ├── CMakeLists.txt
│ │ ├── include
│ │ │ └── gf.h
│ │ └── src
│ │ └── gf.cpp
│ ├── include
│ │ └── bch.h
│ └── src
│ └── bch.cpp
├── bsc
│ ├── CMakeLists.txt
│ ├── include
│ │ └── bsc.h
│ └── src
│ └── bsc.cpp
├── CMakeLists.txt
├── .gitignore
└── main.cpp
Currently I have gf as a subdirectory of bch. The contents of bch/CMakeLists.txt are:
cmake_minimum_required(VERSION 3.17)
project(bch VERSION 0.1.0)
# Targets
add_library(bch STATIC src/bch.cpp)
# Dependant
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/gf)
target_link_libraries(bch PUBLIC gf)
# Export to dependants
target_include_directories(bch PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
I would like to take the gf CMake project and place it outside the directory tree of bch. This doesn't seem to be a structure that the add_subdirectory command supports, unless I'm missing something. Generally, what would be the current "best practice" for decoupling the directory structure from the dependency tree?
If you want to decouple the project from its dependencies, then I would suggest splitting the CMake project into two separate projects, exporting the dependency target and then importing it with find_package. Here is a quick Google find on that topic:
https://visualgdb.com/tutorials/linux/cmake/find_package
[edit]
For a more general approach I suggest cmake documentation:
https://cmake.org/cmake/help/latest/command/find_package.html#command:find_package
https://cmake.org/cmake/help/latest/module/CMakePackageConfigHelpers.html
https://cmake.org/cmake/help/v3.18/command/install.html#export
The idea is (a minimal sketch follows this list):
In the dependency project, generate a *ConfigVersion.cmake file and install all the needed files (headers, binaries and the *ConfigVersion.cmake) using the EXPORT parameter of the install command.
In the final project, use find_package to import the dependency.
For bigger library projects I also suggest using namespaces, which allow importing only selected parts of the library.
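A minimal sketch of both sides, using the gf/bch names from the question (the install layout and the gf:: namespace are just conventions I picked):
# --- gf/CMakeLists.txt (dependency project) ---
cmake_minimum_required(VERSION 3.17)
project(gf VERSION 0.1.0)
add_library(gf STATIC src/gf.cpp)
target_include_directories(gf PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
include(CMakePackageConfigHelpers)
write_basic_package_version_file(gfConfigVersion.cmake COMPATIBILITY SameMajorVersion)
install(TARGETS gf EXPORT gfTargets ARCHIVE DESTINATION lib)
install(DIRECTORY include/ DESTINATION include)
install(EXPORT gfTargets NAMESPACE gf:: FILE gfConfig.cmake DESTINATION lib/cmake/gf)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/gfConfigVersion.cmake DESTINATION lib/cmake/gf)

# --- bch/CMakeLists.txt (consumer) ---
# find_package(gf 0.1 REQUIRED)              # replaces add_subdirectory(.../gf)
# target_link_libraries(bch PUBLIC gf::gf)
When configuring the consumer, point CMAKE_PREFIX_PATH (or gf_DIR) at wherever gf was installed.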
I installed OpenCV with conda using:
conda install -c conda-forge opencv
I can use OpenCV from Python with no errors.
Since conda is a convenient way to install OpenCV, I am wondering whether I can also use the OpenCV installed by conda from C++.
And if so, how do I use it?
I have an opencv.hpp in /home/kandy/miniconda3/include/boost/compute/interop; here is what that folder contains (a sketch of the kind of compile command I'm hoping for follows the listing):
.
├── eigen
│ └── core.hpp
├── eigen.hpp
├── opencv
│ ├── core.hpp
│ ├── highgui.hpp
│ └── ocl.hpp
├── opencv.hpp
├── opengl
│ ├── acquire.hpp
│ ├── cl_gl_ext.hpp
│ ├── cl_gl.hpp
│ ├── context.hpp
│ ├── gl.hpp
│ ├── opengl_buffer.hpp
│ ├── opengl_renderbuffer.hpp
│ └── opengl_texture.hpp
├── opengl.hpp
├── qt
│ ├── qimage.hpp
│ ├── qpointf.hpp
│ ├── qpoint.hpp
│ ├── qtcore.hpp
│ ├── qtgui.hpp
│ └── qvector.hpp
├── qt.hpp
├── vtk
│ ├── bounds.hpp
│ ├── data_array.hpp
│ ├── matrix4x4.hpp
│ └── points.hpp
└── vtk.hpp
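For context, this is roughly the kind of build command I am hoping to end up with (the opencv4 include subdirectory and the exact library names are guesses on my part; pkg-config or CMake's find_package(OpenCV) inside the activated environment might be a cleaner way to discover them):
# Sketch: compile a C++ program against the OpenCV installed by conda,
# assuming the environment is activated so $CONDA_PREFIX is set
g++ main.cpp -o main \
    -I"$CONDA_PREFIX/include/opencv4" \
    -L"$CONDA_PREFIX/lib" \
    -lopencv_core -lopencv_imgcodecs -lopencv_highgui \
    -Wl,-rpath,"$CONDA_PREFIX/lib"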