How to stop SCons adding lib in front of a shared library - C++

I've been playing around with SCons on OSX and I'm trying to make a shared library (.dll, .so, .dylib).
It has all worked perfectly except for one thing that is really annoying me: it adds 'lib' in front of the library name. For example, I choose the name WL and it becomes libWL.dylib. I cannot work out why SCons does this and it is driving me mad.
The code I am using is:
# -*- coding: utf-8 -*-
import os
SourceList = ['Window.cpp']
env = Environment(ENV = os.environ)
#Libraries we are using
Targets = 'WL'
libraries = ['SDL2']
#Paths to the libraries and include paths
Paths = ['/usr/local/lib', '/usr/local/include']
Export('SourceList env libraries Paths Targets')
SConscript('src/SConscript', variant_dir='bin', duplicate=0)
and
Import('SourceList env libraries Paths Targets')
env.SharedLibrary(target=Targets, source=SourceList, LIBS=libraries, LIBPATH=Paths)
I'm not super knowledgeable about how shared libraries work, so I don't know if I could just change the name after it's compiled, but I would prefer SCons simply not add the extra letters.

In each environment, SCons uses variables to specify the prefixes and suffixes of things like libraries and programs. These variables get initialized based on the platform SCons detects it is running on, but you can simply overwrite the setting after the call to the Environment() constructor:
env = Environment()
env['SHLIBPREFIX'] = ''
For "darwin"-like systems, SCons calls the standard "posix" initialization first...that's where the default "lib" prefix comes from.
Tip: You can treat an Environment very much like a dictionary (hash map) and set its values however you need them to be. To display its current contents you can use the Dump() method:
print(env.Dump())
in an SConstruct/SConscript, which gives you a full listing of the defined variables.
You can find a list of standard variables in the MAN page (http://scons.org/doc/production/HTML/scons-man.html) and the User Guide (http://scons.org/doc/production/HTML/scons-user.html).

Related

How can I implement a command line option in Bazel to switch between which dependency version is used for building?

For some background, the C++ program I am working on can interoperate with some other applications that use various Protobuf versions. In the source code for my program, I have the compiled .pb.cc files from these other applications for the Protobuf interface. These .pb.cc files were compiled with a particular version of Protobuf, and I don't have any control over this. I am using Bazel to build, and I want to be able to specify a Bazel build configuration for my program, which will use a particular version of Protobuf that matches the version used by one of the other applications.
Originally, I wanted to put something in the .bazelrc file so that I can specify a particular version of Protobuf depending on the config, for example:
# in .bazelrc:
build:my_config --protobuf_version=3_20_1
build:my_other_config --protobuf_version=3_21_6
Then from the terminal, I could build with the command
bazel build --config=my_config //path/to/target:target
which would build as if I had typed
bazel build --protobuf_version=3_20_1 //path/to/target:target
At this point, I wanted to use the select() function, as detailed in the Bazel docs for Configurable Build Attributes, to use a particular Protobuf version during building. But, the Protobuf dependencies are all specified in the WORKSPACE file, which is more limited than a BUILD file, and this select() function cannot be used there. So then my idea was to pull in every version of the Protobuf library that I would possibly need, and give them different names in the WORKSPACE file, and then in the BUILD files, use a select() function to choose the correct version. But, the Bazel rule for compiling the proto_library is used as such:
proto_library(
    name = "foo",
    srcs = ["foo.proto"],
    strip_import_prefix = "/foo/bar/baz",
)
I don't see any opportunity to use a select() function here to specify which Protobuf version's proto_library rule should be used. The proto_library rule is also loaded from the WORKSPACE file with:
load("@rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
rules_proto_dependencies()
rules_proto_toolchains()
Now, I would say that I am stuck. I don't see a way to specify on the command line which version of Protobuf should be used with the proto_library rule.
In the end, I would like a way to do the equivalent in the WORKSPACE file of
# in WORKSPACE
if my_config:
    # specific protobuf version:
    http_archive(
        name = "com_google_protobuf",
        sha256 = "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930",
        strip_prefix = "protobuf-3.20.1",
        urls = ["https://github.com/protocolbuffers/protobuf/archive/v3.20.1.tar.gz"],
    )
elif my_other_config:
    # same as above, but with different version
else:
    # same as above, but with default version
According to some Google Groups discussion, this doesn't seem to be possible in the WORKSPACE file, so I would need to do it in a BUILD file; but the dependencies are specified in the WORKSPACE.
I figured out a way that works. It seems to go against Bazel's philosophy, but most importantly it does what I want.
The repository dependencies are loaded in the first of two steps: the first involving the WORKSPACE file, and the second involving the BUILD files. Command line flags for the build cannot normally be passed directly to the WORKSPACE, but it is possible to get some information to the WORKSPACE by setting an environment variable and creating a repository_rule. In the WORKSPACE, this environment variable can be used, for example, to change the urls argument of http_archive, which specifies the dependency version.
This repository rule is created in a separate .bzl file, which is then loaded in the WORKSPACE. As a generalized example of how to get environment variable values into the WORKSPACE, the following file my_repository_rule.bzl could be created:
# in file my_repository_rule.bzl

def _my_repository_rule_impl(repository_ctx):
    # read the particular environment variable we are interested in
    config = repository_ctx.os.environ.get("MY_CONFIG_ENV_VAR", "")

    # necessary to create empty BUILD file for this rule,
    # which will be located somewhere in the Bazel build files
    repository_ctx.file("BUILD")

    # some logic to do something based on the value of the environment variable passed in:
    if config.lower() == "example_config_1":
        ADDITIONAL_INFO = "foo"
    elif config.lower() == "example_config_2":
        ADDITIONAL_INFO = "bar"
    else:
        ADDITIONAL_INFO = "baz"

    # create a temporary file called config.bzl to be loaded into WORKSPACE,
    # passing in any desired information from this rule implementation
    repository_ctx.file("config.bzl", content = """
MY_CONFIG = {}
ADDITIONAL_INFO = {}
""".format(repr(config), repr(ADDITIONAL_INFO)))

my_repository_rule = repository_rule(
    implementation = _my_repository_rule_impl,
    environ = ["MY_CONFIG_ENV_VAR"],
)
This can be used in the WORKSPACE as such:
# in file WORKSPACE
load("//:my_repository_rule.bzl", "my_repository_rule")

my_repository_rule(name = "local_my_repository_rule")

load("@local_my_repository_rule//:config.bzl", "MY_CONFIG", "ADDITIONAL_INFO")

print("MY_CONFIG = {}".format(MY_CONFIG))
print("ADDITIONAL_INFO = {}".format(ADDITIONAL_INFO))
When a target is built with bazel build, the WORKSPACE will receive the value of MY_CONFIG_ENV_VAR from the terminal and store it in the Starlark variable MY_CONFIG, along with any other information determined in the rule implementation.
The environment variable can be passed by normal means, such as typing in a bash shell, for example:
MY_CONFIG_ENV_VAR=example_config_1 bazel build //path/to/target:target
It can also be passed with the --repo_env flag, which makes an extra environment variable available to the repository rules, meaning the following is equivalent:
bazel build --repo_env=MY_CONFIG_ENV_VAR=example_config_1 //path/to/target:target
Switching between configurations can be made easier by including the following in the .bazelrc file:
# in file .bazelrc
build:my_config_1 --repo_env=MY_CONFIG_ENV_VAR=example_config_1
build:my_config_2 --repo_env=MY_CONFIG_ENV_VAR=example_config_2
So running bazel build --config=my_config_1 //path/to/target:target will show the debug output from the print statements in WORKSPACE as the following:
MY_CONFIG = example_config_1
ADDITIONAL_INFO = foo
If ADDITIONAL_INFO in the rule implementation (in the file my_repository_rule.bzl) were set to a version number such as "3.20.1", then the WORKSPACE could, for example, use this in an http_archive call to pull the desired version of the dependency.
# in file WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

if ADDITIONAL_INFO == "3.20.1":
    sha256 = "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930"
    http_archive(
        name = "com_google_protobuf",
        sha256 = sha256,
        strip_prefix = "protobuf-{}".format(ADDITIONAL_INFO),
        urls = ["https://github.com/protocolbuffers/protobuf/archive/v{}.tar.gz".format(ADDITIONAL_INFO)],
    )
Of course, the value of the sha256 kwarg could also be passed in from the repository rule as a separate string variable, or as part of a dictionary, for example.
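As an illustrative sketch of the dictionary variant (the dictionary name and the second version entry are assumptions, not part of the original setup), the rule implementation could write both values into config.bzl:
# in file my_repository_rule.bzl (inside _my_repository_rule_impl) - illustrative sketch
# map each supported Protobuf version to the sha256 of its release archive
KNOWN_SHA256 = {
    "3.20.1": "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930",
    # "3.21.6": "<sha256 of that archive>",
}
version = "3.20.1" if config.lower() == "example_config_1" else "3.21.6"
repository_ctx.file("config.bzl", content = """
PROTOBUF_VERSION = {}
PROTOBUF_SHA256 = {}
""".format(repr(version), repr(KNOWN_SHA256.get(version, ""))))
The WORKSPACE could then load PROTOBUF_VERSION and PROTOBUF_SHA256 from config.bzl and pass them straight to the http_archive call.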

Adding include path to Waf configuration (C++)

How can I add a include path to wscript?
I know I can declare the include folders I want to use per target, like:
def build(bld):
    bld(features='c cxx cxxprogram',
        includes='include',
        source='main.cpp',
        target='app',
        use=['M','mylib'],
        lib=['dl'])
but I do not want to set it for every file. I want to add a path to "global includes" so it is included every time any file is compiled.
I've found an answer. You simply have to set the value of 'INCLUDES' to the list of paths you want.
Do not forget to run waf configure again :)
def configure(cfg):
    cfg.env.append_value('INCLUDES', ['include'])
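With that in place, a build rule no longer needs its own includes entry; a minimal sketch (file and target names are just examples) could be:
def build(bld):
    # 'include' is already on the global INCLUDES path set during configure,
    # so it does not need to be repeated per target
    bld(features='c cxx cxxprogram',
        source='main.cpp',
        target='app')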
I spent some time working out a good way to do this using the "use" option in bld.program() methods. Working with the boost libraries as an example, I came up with the following. I hope it helps!
'''
run waf with -v option and look at the command line arguments given
to the compiler for the three cases.
you may need to include the boost tool into waf to test this script.
'''

def options(opt):
    opt.load('compiler_cxx boost')

def configure(cfg):
    cfg.load('compiler_cxx boost')
    cfg.check_boost()
    cfg.env.DEFINES_BOOST = ['NDEBUG']

    ### the following line would be very convenient
    ### cfg.env.USE_MYCONFIG = ['BOOST']
    ### but this works too:
    def copy_config(cfg, name, new_name):
        i = '_' + name
        o = '_' + new_name
        l = len(i)
        d = {}
        for key in cfg.env.keys():
            if key[-l:] == i:
                d[key.replace(i, o)] = cfg.env[key]
        cfg.env.update(d)

    copy_config(cfg, 'BOOST', 'MYCONFIG')

    # now modify the new env/configuration
    # this adds the appropriate "boost_" to the beginning
    # of the library and the "-mt" to the end if needed
    cfg.env.LIB_MYCONFIG = cfg.boost_get_libs('filesystem system')[-1]

def build(bld):
    # basic boost (no libraries)
    bld.program(target='test-boost2', source='test-boost.cpp',
                use='BOOST')

    # myconfig: boost with two libraries
    bld.program(target='test-boost', source='test-boost.cpp',
                use='MYCONFIG')

    # warning:
    # notice the NDEBUG shows up twice in the compilation
    # because MYCONFIG already includes everything in BOOST
    bld.program(target='test-boost3', source='test-boost.cpp',
                use='BOOST MYCONFIG')
I figured this out and the steps are as follows:
Added the following check to the configure function in the wscript file. This tells the script to check for the given library (libmongoclient in this case), and we store the results of this check in MONGOCLIENT.
conf.check_cfg(package='libmongoclient', args=['--cflags', '--libs'], uselib_store='MONGOCLIENT', mandatory=True)
After this step, we need to add a package configuration (.pc) file to the /usr/local/lib/pkgconfig path. This is the file where we specify the paths to the libs and headers. The content of this file is pasted below.
prefix=/usr/local
libdir=/usr/local/lib
includedir=/usr/local/include/mongo
Name: libmongoclient
Description: Mongodb C++ driver
Version: 0.2
Libs: -L${libdir} -lmongoclient
Cflags: -I${includedir}
Added the dependency into the build function of the specific program which depends on the above library (i.e. MongoClient). Below is an example.
mobility = bld(
    target='bin/mobility',
    features='cxx cxxprogram',
    source='src/main.cpp',
    use='mob-objects MONGOCLIENT',
)
After this, run configure again and build your code.

How to build in different environments with SCons?

I just discovered SCons, a great build tool.
I need to build my project in multiple environments, i.e. with different library paths and include paths depending on the machine.
Since SConstruct has all of Python available, I can imagine various ways to accomplish this. One possibility would be to have a single SConstruct script and instantiate multiple Environment objects.
envFoo = Environment()
envFoo.Append(CPPPATH = [...])
envBar = Environment()
envBar.Append(CPPPATH = [...])
Then select one of these Environment objects somehow, possibly with a command line parameter to scons.
A question for experienced scons users: Is this the way to go? What is the most convenient way of doing this?
This would definitely work, and would probably be a good solution for simpler situations. I have a more complex situation where I actually create an EnvironmentFactory using Python classes, something like this:
env = EnvironmentFactory.createEnv(cmdLineArgs)
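The factory itself is not shown here; a minimal sketch of what such a class might look like in a SConstruct (all names and paths are assumptions) is:
# hypothetical environment factory in a SConstruct
class EnvironmentFactory(object):
    @staticmethod
    def createEnv(args):
        # choose include/library paths based on a command line argument,
        # e.g. scons flavor=bar
        flavor = args.get('flavor', 'foo')
        env = Environment()
        if flavor == 'foo':
            env.Append(CPPPATH=['/opt/foo/include'], LIBPATH=['/opt/foo/lib'])
        else:
            env.Append(CPPPATH=['/opt/bar/include'], LIBPATH=['/opt/bar/lib'])
        return env

env = EnvironmentFactory.createEnv(ARGUMENTS)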
Another useful SCons feature is to "automatically" populate the Environment from command line options using ParseFlags(), ParseConfig(), and MergeFlags(), as described in this chapter.
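For instance, a small sketch of those calls (the flag string and the pkg-config package are just examples):
env = Environment()
# merge flags written in ordinary compiler syntax into the environment
env.MergeFlags('-I/usr/local/include -L/usr/local/lib')
# or let an external tool such as pkg-config fill in the flags
env.ParseConfig('pkg-config --cflags --libs sdl2')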
You could do this:
envSelect = ARGUMENTS.get('env', "default")

if envSelect == "default":
    env = Environment()
elif envSelect == "env2":
    env = Environment()  # whatever you want to do here
elif envSelect == "env3":
    env = Environment()  # whatever you want to do here
else:
    env = Environment()  # whatever you want to do here
With ARGUMENTS.get you specify a default for when you just run the command scons;
otherwise you run scons env=env2. Hope this answers your question.
EDIT: here's a snippet from a project I'm working on where I want to determine the OS when scons runs
import sys
import os
import glob
import subprocess

# Find the host Operating System
platform = sys.platform

if platform != "win32":
    env = Environment()
else:
    # Specify to compile in 32-bit mode for Visual Studio
    # This is needed as Qt libraries on Windows for Visual
    # Studio are 32-bit only
    env = Environment(TARGET_ARCH='x86')

# Get Qt directory as scons argument or use default setting
if platform != "win32":
    qtDir = ARGUMENTS.get('qt', '/usr/local/Trolltech/Qt-4.8.4/')
else:
    qtDir = ARGUMENTS.get('qt', 'C:\\Qt\\4.8.4\\')

How to find "my" lib directory?

I'm developing a C++ program under Linux. I want to put some stuff (to be specific, LLVM bitcode files, but that's not important) in libraries, so I want the following directory structure:
/somewhere/bin/myBin
/somewhere/lib/myLib.bc
How do I find the lib directory? I tried to compute a relative path from argv[0], but if /somewhere is in my PATH, argv[0] will just contain myBin. Is there some way to get this path? Or do I have to set it at compile time?
How do GNU autotools deal with this? What happens exactly if I supply the --prefix option to ./configure?
Edit: The word library is a bit misleading in my case. My library consists of LLVM bitcode, so it's not an actual (shared) object file, just a file I want to open from my program. You can think of it as an image or text file.
Maybe what you want is:
/usr/lib
Unix directory reference: http://www.comptechdoc.org/os/linux/usersguide/linux_ugfilestruct.html
Assume your lib directory is "../lib" relative to the executable.
First you need to identify where myBin is located; you can get that by reading /proc/self/exe.
Then concatenating your binary's path with "../lib" will give you the lib directory.
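As a quick sketch of that path arithmetic (shown in Python for brevity; a C++ program would do the same with readlink() and realpath()):
import os

# resolve the real path of the running executable, e.g. /somewhere/bin/myBin
exe_path = os.path.realpath('/proc/self/exe')

# go from .../bin/myBin up to .../lib and then to the bundled file
lib_dir = os.path.normpath(os.path.join(os.path.dirname(exe_path), '..', 'lib'))
bitcode_file = os.path.join(lib_dir, 'myLib.bc')   # -> /somewhere/lib/myLib.bc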
You will have to use a compiler flag to tell the program. For example, if you have a plugin dir:
# Makefile.am
AM_CPPFLAGS = -DPLUGIN_DIR=\"${pkglibdir}\"
bin_PROGRAMS = awesome_prog
pkglib_LTLIBRARIES = someplugin.la
The list of directories to be searched is stored in the file /etc/ld.so.conf.
In Linux, the environment variable LD_LIBRARY_PATH is a colon-separated set of directories where libraries should be searched for first, before the standard set of directories; this is useful when debugging a new library or using a nonstandard library for special purposes.
LD_LIBRARY_PATH is handy for development and testing:
$ export LD_LIBRARY_PATH=/path/to/dir/containing/mylib.so
$ ./myprogram
Addressing only the portion of the question "how do GNU autotools deal with this?"...
When you assign a --prefix to configure, basically two things happen: 1) it instructs the build system that everything is to be installed in ${prefix}, and 2) it looks in ${prefix}/share/config.site for any additional information about how the system is set up (it is common for that file not to exist). It does absolutely nothing to help find libraries, but depends on the user having set up the tool chain properly. If you want to use a library in /foo/lib, you must have your toolchain set up to look there (e.g., by putting /foo/lib in /etc/ld.so.conf, or by putting -L/foo/lib in LDFLAGS and "/foo/lib" in LD_LIBRARY_PATH).
The configure script relies on you to have the environment set up. It does not help you set up that environment, but does help by alerting you that you have not done so.
You could use the readlink system call on /proc/self/exe to get the path of your executable. You might then use realpath etc.

Where Should Shared Object Files Be Placed?

I am venturing into the land of creating C/C++ bindings for Python using pybindgen. I've followed the steps outlined under "Building it (GCC instructions)" to create bindings for the sample files:
http://packages.python.org/PyBindGen/tutorial.html#a-simple-example
Running make produces a .so file. If I understand how .so files work, I should be able to import the classes in the shared object into Python. However, I'm not sure where to place the file and how to let Python know where it is. Additionally, do the original C/C++ source files need to accompany the .so file?
So far I've tried placing the file in /usr/local/lib and adding that path to DYLD_LIBRARY_PATH in my .bash_profile. When I try to import the module from within the Python interpreter, an error is thrown stating that the module cannot be found.
So, my question is: What needs to be done with the generated .so file in order for it to be used by a Python program?
Python looks for .so modules in the same directories where it searches for Python ones. So you have to install it as you would a normal Python module: either somewhere that is on Python's sys.path by default (/usr/share/python/site-lib or something like that; it's distribution-dependent) or by adding the directory to the PYTHONPATH environment variable.
It's Python that's loading the module using dlopen, not the dynamic linker, so LD_LIBRARY_PATH (note, there is no DY) won't help you.
Same as all other Python modules. It must be within one of the locations given in sys.path.
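As a quick check, a small sketch (the directory and module names are just examples) of making such a directory visible to the interpreter:
import sys

# show where Python currently looks for modules, including .so files
print(sys.path)

# make an extra directory visible for this session only
sys.path.append('/usr/local/lib/my_bindings')
import mymodule  # the generated .so, assuming it is named mymodule.so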