c-sources is not a good relative path - hackage

I am trying to upload a package to Hackage and get the brief message
c-sources: C/libqhull_r.c' is not good relative path: same directory segment: ..
I cannot find (with Google) any hint as to what is meant or what I should change (I am not the author of the code, but I have the author's permission to upload to Hackage).
The cabal file contains
include-dirs: C
C-sources: C/libqhull_r.c
         , C/geom_r.c
         , C/geom2_r.c
and builds with cabal build.
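For reference, this is roughly how those fields sit inside a library stanza of a .cabal file; the module name and version bounds below are placeholders, not taken from the actual package:
library
  exposed-modules:  Numeric.Qhull    -- placeholder module name
  include-dirs:     C
  c-sources:        C/libqhull_r.c
                  , C/geom_r.c
                  , C/geom2_r.c
  build-depends:    base >=4 && <5   -- placeholder bounds
  default-language: Haskell2010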

Related

Include C++ library in Bazel project

I'm currently messing around with Google's Mediapipe, which uses Bazel as a build tool. The folder has the following structure:
mediapipe
├ mediapipe
|  └ examples
|     └ desktop
|        └ hand_tracking
|           └ BUILD
├ calculators
|  └ tensor
|     ├ tensor_to_landmarks_calculator.cc
|     └ BUILD
└ WORKSPACE
There are a bunch of other files in there as well, but they are rather irrelevant to this problem. They can be found in the git repo linked above if you need them.
I'm at a stage where I can build and run the hand_tracking example without any problems. Now, I want to include the cereal library in the build, so that I can use #include <cereal/archives/binary.hpp> from within tensor_to_landmarks_calculator.cc. The cereal library is located at C:\cereal, but can be moved to other locations if it simplifies the process.
Basically, I'm looking for the Bazel equivalent of adding a path to Additional Include Directories in Visual Studio.
How would I need to modify the WORKSPACE and BUILD files in order to include the library in my project, assuming they are in a default state?
Unfortunately, this official doc page only covers single-file libraries, and other approaches I tried kept giving me "File could not be found" errors at build time.
Thanks in advance!
First you have to tell Bazel about the code living "outside" the workspace area. It needs to know how to find it, how to build it, what to call it, and so on. These are known as remote repositories. They can be local to your disk (outside the Bazel workspace area), or actually remote on another machine or server, like GitHub. The important thing is that it must be described to Bazel with enough information for Bazel to use it.
As most third-party code does not come with BUILD.bazel files, you may need to provide one yourself and tell Bazel "use this as if it were a build file found in that code."
For a local directory outside your Bazel project
Add a repository rule like this to your WORKSPACE file:
# This could go in your WORKSPACE file
# (But prefer the http_archive solution below)
new_local_repository(
    name = "cereal",
    build_file = "//third_party:cereal.BUILD.bazel",
    path = "<path-to-directory>",
)
("new_local_repository" is built-in to bazel)
Somewhere under your Bazel WORKSPACE area you'll also need to make a cereal.BUILD.bazel file and export it from its package. I chose a directory called //third_party, but you can put it anywhere else and name it anything else, as long as the repository rule provides a proper Bazel label for it. The contents might look like this:
# contents of //third_party/cereal.BUILD.bazel
cc_library(
    name = "cereal-lib",
    # Headers go in hdrs (not srcs) so that dependent targets can #include them.
    hdrs = glob(["**/*.hpp"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
Bazel will pretend this was the BUILD file that "came with" the remote repository, even though it's actually local to your repo. When Bazel fetches the remote repository code, it copies it, along with the BUILD file you provide, into its external area for caching, building, etc.
To make //third_party:cereal.BUILD.bazel a valid target in your directory, add a BUILD.bazel file to that directory:
# contents of //third_party/BUILD.bazel
exports_files(["cereal.BUILD.bazel"])
Without exporting it, you won't be able to refer to the build file from your repository rule.
Local disk repositories aren't very portable: people may have different versions installed, it's not very hermetic (making it hard to share build caches with others), and it requires everyone to put the code in the same place, which can be problematic. It will also fail when you mix operating systems, e.g. if you refer to it as "C:\...".
Downloading a tarball of the library from GitHub, for example
A better way is to download a fixed version from GitHub, for example, and let Bazel manage it for you in its external area:
# This also goes in your WORKSPACE file
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "cereal",
    sha256 = "329ea3e3130b026c03a4acc50e168e7daff4e6e661bc6a7dfec0d77b570851d5",
    strip_prefix = "cereal-1.3.0",  # the v1.3.0 tag tarball unpacks into this directory
    urls = ["https://github.com/USCiLab/cereal/archive/refs/tags/v1.3.0.tar.gz"],
    build_file = "//third_party:cereal.BUILD.bazel",
)
The sha256 is important: Bazel downloads the archive, computes its hash, compares it to the one you specified, and caches the file. In the future it won't re-download the archive if the cached file's sha matches.
Notice that it again says build_file = "//third_party:cereal.BUILD.bazel"; all the same things from new_local_repository above apply here. Make sure you provide the build file for it to use, and export it from where you put it.
To test that the remote repository is set up OK, issue this on the command line:
bazel fetch @cereal//:cereal-lib
I sometimes have to clear it out to make Bazel try again if my rule isn't quite right, because the "bad" version sticks around.
bazel clean --expunge
will remove it, but might be overkill.
Finally
We have:
defined a remote repository called @cereal
defined a target in it called cereal-lib
the target is thus @cereal//:cereal-lib
To use it
Go to the package where you would like to include cereal, and add a dependency on this repository to the rule that builds the C++ code that needs cereal. That is, in your case, in the BUILD rule that causes tensor_to_landmarks_calculator.cc to get built, add:
deps = [
    "@cereal//:cereal-lib",
    ...
]
And then in your C++ code:
#include "cereal/cereal.hpp"
That should do it.
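As a quick sanity check that the dependency and include path are wired up, something like the following minimal sketch should then compile in a file belonging to that rule. The struct and file name here are made up for illustration and are not part of Mediapipe or cereal:
// Hypothetical snippet: serialize a small struct with cereal's binary archive.
#include <fstream>
#include <cereal/archives/binary.hpp>

struct Landmark {
  float x, y, z;
  template <class Archive>
  void serialize(Archive& ar) { ar(x, y, z); }  // cereal's intrusive serialize hook
};

void SaveLandmark(const Landmark& lm) {
  std::ofstream os("landmark.bin", std::ios::binary);
  cereal::BinaryOutputArchive archive(os);  // provided by binary.hpp
  archive(lm);                              // write lm to the stream
}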

R-package installation from GitHub: changes when R package is in a subdir and C++ code in root

Installation of my R package from GitHub fails when using the devtools::install_github function, when the package is in a subdirectory and the C++ code is located at the repository root.
The repository can be found here: https://github.com/Blunde1/gbtorch
The master branch has the C++ code located in the R package folder. However, as I want to build a Python package later, it makes sense to move the C++ header files to the root. I modified this in a new branch: https://github.com/Blunde1/gbtorch/tree/Restructuring-Header-Files
Locally, this works after modifying the Makevars files with
PKG_CPPFLAGS = -I../inst/include
being changed to
PKG_CPPFLAGS = -I../../inst/include
I therefore assume a clone and manual installation should work, but I want this to be easy and hope that installation using devtools::install_github will be possible.
I am most likely ignorant of something, as this is not a new problem. I researched (read "googled") and found that the devtools team has indeed thought of this: https://github.com/r-lib/devtools/issues/64, which led to the subdir argument in devtools::install_github.
Here it seems the issue is solved, but still, the solution with subdir does not work for me:
This works: master branch, C++ located in R-package folder
devtools::install_github("Blunde1/gbtorch", ref="master", subdir = "R-package")
This fails: new development branch, C++ code not located at R-package folder
devtools::install_github("Blunde1/gbtorch", ref="Restructuring-Header-Files", subdir = "R-package")
The above command gives the following error:
gbtorch.cpp:8:23: fatal error: gbtorch.hpp: No such file or directory
#include "gbtorch.hpp"
If the header files' include path is not set in the Makevars file, then where should it be set? Is it possible that some configure file might do the trick?
Any ideas on how to fix this? Any help on the subject is greatly appreciated!

Error when running fmu example in JModelica 2.4 User's Guide: could not find file

I followed the JModelica user's manual word for word for the installation, and when running the FMU example in IPython I get:
from pymodelica import compile_fmu
from pyfmi import load_fmu
my_fmu = compile_fmu('RLC_Circuit','RLC_Circuit.mo')
Could not find file: RLC_Circuit.mo (The system cannot find the file specified)
The file RLC_Circuit.mo is present in a folder, but apparently "the system cannot find it". So how do I add the path of the parent folder?
Have a look at RLC.py in install/Python{_64}/pyjmi/examples. There you can see how to add the path to the .mo file in compile_fmu.
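In other words, pass the full path of the .mo file to compile_fmu instead of just the file name. A minimal sketch, assuming the model sits in a folder named C:\JModelica_models (adjust to the real location):
import os
from pymodelica import compile_fmu
from pyfmi import load_fmu

# Assumed location of the model file; replace with the real parent folder.
model_dir = r'C:\JModelica_models'
model_file = os.path.join(model_dir, 'RLC_Circuit.mo')

# Passing the full path lets the compiler find the model regardless of the working directory.
my_fmu = compile_fmu('RLC_Circuit', model_file)
rlc = load_fmu(my_fmu)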

LLDB - setting source code path

According to the official lldb documentation, the ability to view source code during a debug session (using the source list command) relies on setting a new pathname for the source files.
i.e. if I compiled my project in /tmp on one computer and deployed it on another computer where the source code resides in /Users/Src/, I should type settings set target.source-map /tmp /Users/Src when running lldb on the deployment machine.
However, what happens if I got the executable from someone else and don't know the build directory, and the source code may be organized differently from where it was built (but the file contents are the same)?
My questions are:
Does lldb know how to search recursively for a matching source file in the supplied path?
How can I get the original pathname from the Mach-O executable?
Here's the formal description of the command:
Remap source file pathnames for the debug session. If your source files are no longer located in the same location as when the program was built --- maybe the program was built on a different computer --- you need to tell the debugger how to find the sources at their local file path instead of the build system's file path.
If you know a function name in the code in question, do:
(lldb) image lookup -vn <FunctionName> <BinaryImageNameContainingFunction>
and look for the CompileUnit entry. The path given there is the path lldb got from the debug information.
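Putting the two commands together, a session on the deployment machine might look like this; the function name, binary name, and build path below are hypothetical placeholders, while the local path /Users/Src comes from the question:
(lldb) image lookup -vn main MyProgram
(lldb) settings set target.source-map /original/build/path /Users/Src
(lldb) source list
The first command reveals the compile-time path in the CompileUnit entry; that path then becomes the first argument to target.source-map.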

Installing SML/NJ library

I need to install the QCheck/SML unit test library for ML.
I could git clone the code and create the .cm file, but I'm not sure where the generated file should be copied. The documentation simply says (http://contrapunctus.net/league/haques/qcheck/qcheck_2.html):
2.1 SML/NJ
For Standard ML of New Jersey, the CM library specification ‘qcheck.cm’ should be all you need. The default target of make -f Makefile.nj will ask CM to build and stabilize this library. This creates a file ‘.cm/x86-unix/qcheck.cm’ (alter the arch/os tag as needed) which may be copied into the standard CM library path and added to the ‘pathconfig’.
I used brew install smlnj for the ML installation on Mac, so I have SMLNJ_HOME at /usr/local/Cellar/smlnj/110.78/SMLNJ_HOME.
What is the CM library path in this case? In general, how do I install a library into SML/NJ?
Edit
From Matt's answer, this is how I made it work.
Setup
Copy the whole qcheck directory into /usr/local/Cellar/smlnj/110.78/SMLNJ_HOME/lib.
Make a ~/.smlnj-pathconfig file.
Add the line qcheck.cm /usr/local/Cellar/smlnj/110.78/SMLNJ_HOME/lib/qcheck to that file.
Usage (in REPL)
CM.make "$/qcheck.cm";
open QCheck;
Things to consider.
I couldn't use the stabilized library (qcheck/.cm/x86-unix/qcheck.cm), so I had to copy the whole directory.
For a user's library, I think the install location can be anywhere, as ~/.smlnj-pathconfig can point to the directory.
For importing a structure defined in a file in the same directory, use "FILENAME"; is what's needed, instead of CM.make.
The CM library path is located in SMLNJ_HOME/lib. You can place the .cm file there. The instructions say to modify the pathconfig file; however, I would suggest creating a .smlnj-pathconfig file in your home directory instead. You will then want to paste the following line into that file:
qcheck.cm <path to directory containing qcheck.cm file>
You can then reference this in one of your .cm files using the anchor name: $/qcheck.cm. I've not used stabilized libraries before, and the generated .cm file is giving me a bunch of errors. If you instead use the qcheck.cm file from the root directory of the qcheck repo, it seems to work for me. Perhaps someone else can comment on why I am getting these errors.