Making Qt ignore specific header include files - C++

I have a running project made in Qt. For building purposes I'm using the waf build tool. To get the same project up and running from waf I need to add
#include "file.moc"
at the end of some files to avoid undefined references. But if these includes are not commented out in Qt, I get "cannot find file" errors. How do you make Qt ignore certain file includes? I thought something like this should have done the trick:
#ifndef Q_MOC_RUN
#include "file.moc"
#endif

Due to the limited information provided, all I can show is what waf can do:
You can either include the moc files or the unprocessed files.
Examples are included in the distributed source at https://code.google.com/p/waf/downloads/detail?name=waf-1.6.11.tar.bz2
Subdirectories:
playground/slow_qt/
demos/qt4/
For the sake of completeness, simplified examples:
default includes
def options(opt):
    opt.load('compiler_cxx qt4')

def configure(conf):
    conf.load('compiler_cxx qt4')
    conf.load('slow_qt4')

def build(bld):
    bld(
        features = 'qt4 cxx cxxprogram',
        uselib   = 'QTCORE QTGUI QTOPENGL QTSVG',
        source   = 'some.cpp files.cpp',
        includes = '.',
        target   = 'dummy',
    )
moc cpp
def options(opt):
    opt.load('compiler_cxx qt4')

def configure(conf):
    conf.load('compiler_cxx qt4')

def build(bld):
    bld(
        features = 'qt4 cxx cxxprogram',
        uselib   = 'QTCORE QTGUI QTOPENGL QTSVG',
        source   = 'some.cpp files.cpp',
        target   = 'dummy',
        includes = '.',
    )

Related

Accessing resources when compiling a Qt application with Bazel

I'm using Bazel to compile a Qt application (https://github.com/bbreslauer/qt-bazel-example) that uses shaders defined in a qrc file.
When I try to access the resource file, it is not available (as I did not connect the qrc file to the compilation).
How can I define the qrc file content in the build?
UPDATE
Following the response by @ypnos, I'm trying to add a macro to my qt.bzl file. I would like the macro to receive a list of files as an argument, create the (temporary) qrc file, and run the rcc command.
I am currently struggling with:
Running Python code in the bzl file is not as straightforward as I thought. It cannot generate a file ("open" is undefined). Is it possible? If yes, how? (See the example below.)
Even with a given qrc file, I can't get the command to work. I guess I'm doing something wrong with the command-line arguments, but I can't find a reference/manual for that.
This is what I have so far (my qt.bzl file):
...
def qt_resource(name, file_list, **kwargs):
    ## following doesnt work inside the bzl file:
    # fid = open('%s.qrc' % name, 'w')
    # fid.write("<RCC>\n")
    # fid.write("\t<qresource prefix=\"/%s\">\n" % name)
    # for x in file_list:
    #     fid.write("\t\t<file>%s</file>\n" % x)
    # fid.write("\t</qresource>\n")
    # fid.write("</RCC>\n")
    # fid.close()

    native.genrule(
        name = "%s_res" % name,
        outs = ["rcc_%s.cpp" % name],
        cmd = "rcc %s.qrc -o $#/rcc_%s.cpp" % (name, name),
    )

    srcs = [":rcc_%s.cpp" % name]

    native.cc_library(
        name = name,
        srcs = srcs,
        hdrs = [],
        deps = [],
        **kwargs
    )
It seems the Bazel example that you are using does not come with support for qrc files (it only handles moc and ui files).
QRC files need to be transformed into C++ sources using rcc and then compiled. The concept is similar to that of .ui files, which are converted to headers.
Maybe you can patch qt.bzl to add that functionality.
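To illustrate the idea, here is a minimal sketch of what such a macro in qt.bzl could look like, assuming the .qrc file is checked into the workspace (rather than generated on the fly) and that rcc is available on the PATH; the macro name qt_resource and its arguments here are placeholders, not part of the linked example:

def qt_resource(name, qrc_file, resource_files, **kwargs):
    # Generate rcc_<name>.cpp from the checked-in .qrc file; the resource files
    # are listed as srcs so Bazel makes them visible to the rcc invocation.
    native.genrule(
        name = name + "_rcc",
        srcs = [qrc_file] + resource_files,
        outs = ["rcc_" + name + ".cpp"],
        cmd = "rcc -name %s $(location %s) -o $@" % (name, qrc_file),
    )

    # Compile the generated source into a library that application targets can depend on.
    native.cc_library(
        name = name,
        srcs = ["rcc_" + name + ".cpp"],
        **kwargs
    )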

Speeding up build process with distutils

I am programming a C++ extension for Python and I am using distutils to compile the project. As the project grows, rebuilding it takes longer and longer. Is there a way to speed up the build process?
I read that parallel builds (as with make -j) are not possible with distutils. Are there any good alternatives to distutils which might be faster?
I also noticed that it's recompiling all object files every time I call python setup.py build, even when I only changed one source file. Should this be the case or might I be doing something wrong here?
In case it helps, here are some of the files which I try to compile: https://gist.github.com/2923577
Thanks!
Try building with the environment variable CC="ccache gcc"; that will speed up the build significantly when the source has not changed. (Strangely, distutils uses CC for C++ source files as well.) Install the ccache package, of course.
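As a minimal sketch (on Unix-like systems; the module name and source file are placeholders), the variable can also be set from inside setup.py before distutils configures the compiler:

import os

# Route compilation through ccache; assumes ccache and gcc are installed.
os.environ['CC'] = 'ccache gcc'
os.environ['CXX'] = 'ccache g++'

from distutils.core import setup, Extension

setup(
    name='example',
    ext_modules=[Extension('example', sources=['example_module.cpp'])],
)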
Since you have a single extension which is assembled from multiple compiled object files, you can monkey-patch distutils to compile those in parallel (they are independent) - put this into your setup.py (adjust the N=2 as you wish):
# monkey-patch for parallel compilation
def parallelCCompile(self, sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, extra_postargs=None, depends=None):
    # those lines are copied from distutils.ccompiler.CCompiler directly
    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(output_dir, macros, include_dirs, sources, depends, extra_postargs)
    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
    # parallel code
    N = 2  # number of parallel compilations
    import multiprocessing.pool
    def _single_compile(obj):
        try:
            src, ext = build[obj]
        except KeyError:
            return
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
    # convert to list, imap is evaluated on-demand
    list(multiprocessing.pool.ThreadPool(N).imap(_single_compile, objects))
    return objects

import distutils.ccompiler
distutils.ccompiler.CCompiler.compile = parallelCCompile
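For illustration, a minimal sketch of how this might look in setup.py (the monkey-patch block above goes first; the extension name and source files here are placeholders):

from distutils.core import setup, Extension

# The parallelCCompile patch defined above must already be applied at this point.
setup(
    name='mymodule',
    ext_modules=[Extension('mymodule', sources=['a.cpp', 'b.cpp', 'c.cpp'])],
)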
For the sake of completeness, if you have multiple extensions, you can use the following solution:
import os
import multiprocessing

try:
    from concurrent.futures import ThreadPoolExecutor as Pool
except ImportError:
    from multiprocessing.pool import ThreadPool as LegacyPool

    # To ensure the with statement works. Required for some older 2.7.x releases
    class Pool(LegacyPool):
        def __enter__(self):
            return self

        def __exit__(self, *args):
            self.close()
            self.join()


def build_extensions(self):
    """Function to monkey-patch
    distutils.command.build_ext.build_ext.build_extensions
    """
    self.check_extensions_list(self.extensions)

    try:
        num_jobs = os.cpu_count()
    except AttributeError:
        num_jobs = multiprocessing.cpu_count()

    with Pool(num_jobs) as pool:
        pool.map(self.build_extension, self.extensions)


def compile(
    self, sources, output_dir=None, macros=None, include_dirs=None,
    debug=0, extra_preargs=None, extra_postargs=None, depends=None,
):
    """Function to monkey-patch distutils.ccompiler.CCompiler"""
    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs
    )
    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)

    for obj in objects:
        try:
            src, ext = build[obj]
        except KeyError:
            continue
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)

    # Return *all* object filenames, not just the ones we just built.
    return objects


from distutils.ccompiler import CCompiler
from distutils.command.build_ext import build_ext

build_ext.build_extensions = build_extensions
CCompiler.compile = compile
I've got this working on Windows with clcache, derived from eudoxos's answer:
# Python modules
import datetime
import distutils
import distutils.ccompiler
import distutils.sysconfig
import multiprocessing
import multiprocessing.pool
import os
import sys

from distutils.core import setup
from distutils.core import Extension
from distutils.errors import CompileError
from distutils.errors import DistutilsExecError

now = datetime.datetime.now

ON_LINUX = "linux" in sys.platform

N_JOBS = 4

#------------------------------------------------------------------------------
# Enable ccache to speed up builds
if ON_LINUX:
    os.environ['CC'] = 'ccache gcc'

# Windows
else:
    # Using clcache.exe, see: https://github.com/frerich/clcache
    # Insert path to clcache.exe into the path.
    prefix = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(prefix, "bin")

    print "Adding %s to the system path." % path
    os.environ['PATH'] = '%s;%s' % (path, os.environ['PATH'])

    clcache_exe = os.path.join(path, "clcache.exe")

#------------------------------------------------------------------------------
# Parallel Compile
#
# Reference:
#
# http://stackoverflow.com/questions/11013851/speeding-up-build-process-with-distutils
#
def linux_parallel_cpp_compile(
        self,
        sources,
        output_dir=None,
        macros=None,
        include_dirs=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        depends=None):

    # Copied from distutils.ccompiler.CCompiler
    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs)

    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)

    def _single_compile(obj):
        try:
            src, ext = build[obj]
        except KeyError:
            return
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)

    # convert to list, imap is evaluated on-demand
    list(multiprocessing.pool.ThreadPool(N_JOBS).imap(
        _single_compile, objects))

    return objects


def windows_parallel_cpp_compile(
        self,
        sources,
        output_dir=None,
        macros=None,
        include_dirs=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        depends=None):

    # Copied from distutils.msvc9compiler.MSVCCompiler
    if not self.initialized:
        self.initialize()

    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs)

    compile_opts = extra_preargs or []
    compile_opts.append('/c')
    if debug:
        compile_opts.extend(self.compile_options_debug)
    else:
        compile_opts.extend(self.compile_options)

    def _single_compile(obj):
        try:
            src, ext = build[obj]
        except KeyError:
            return
        input_opt = "/Tp" + src
        output_opt = "/Fo" + obj
        try:
            self.spawn(
                [clcache_exe]
                + compile_opts
                + pp_opts
                + [input_opt, output_opt]
                + extra_postargs)
        except DistutilsExecError, msg:
            raise CompileError(msg)

    # convert to list, imap is evaluated on-demand
    list(multiprocessing.pool.ThreadPool(N_JOBS).imap(
        _single_compile, objects))

    return objects

#------------------------------------------------------------------------------
# Only enable parallel compile on 2.7 Python
if sys.version_info[1] == 7:
    if ON_LINUX:
        distutils.ccompiler.CCompiler.compile = linux_parallel_cpp_compile
    else:
        import distutils.msvccompiler
        import distutils.msvc9compiler

        distutils.msvccompiler.MSVCCompiler.compile = windows_parallel_cpp_compile
        distutils.msvc9compiler.MSVCCompiler.compile = windows_parallel_cpp_compile

# ... call setup() as usual
You can do this easily if you have Numpy 1.10 available. Just add:
try:
    from numpy.distutils.ccompiler import CCompiler_compile
    import distutils.ccompiler
    distutils.ccompiler.CCompiler.compile = CCompiler_compile
except ImportError:
    print("Numpy not found, parallel compile not available")
Use -j N or set NPY_NUM_BUILD_JOBS.
In the limited examples you provided in the link, it seems fairly obvious that you have some misunderstanding of what some of the features of the language are. For example, gsminterface.h has a whole lot of namespace-level statics, which is probably unintended. Every translation unit that includes that header will compile its own version of every one of the symbols declared in that header. The side effects of this are not only longer compile times but also code bloat (larger binaries) and longer link times, as the linker needs to process all those symbols.
There are still many questions that affect the build process that you have not answered, for example whether you clean every time before you recompile. If you do, you might want to consider ccache, a tool that caches the result of the build process, so that if you run make clean; make target, only the preprocessor will be run for any translation unit that has not changed. Note that as long as you keep most of the code in headers, this will not offer much of an advantage, as a change in a header modifies all translation units that include it. (I don't know your build system, so I cannot tell whether python setup.py build will clean or not.)
The project does not seem large otherwise, so I would be surprised if it took more than a few seconds to compile.

GStreamer include error in waf: gst/gst.h: No such file or directory

I am trying to build a GStreamer program using waf. I am having some trouble including GStreamer files with waf.
I am getting this error:
[ 4/37] qxx: test/Playback/GSTEngine.cpp -> build/test/Playback/GSTEngine.cpp.4.o
../test/Playback/GSTEngine.cpp:1:21: fatal error: gst/gst.h: No such file or directory
My wscript (waf) file:
top = '.'
out = 'build'

def options(opt):
    opt.load('compiler_cxx qt4 compiler_c')
    #opt.recurse(subdirs)

def configure(conf):
    conf.load('compiler_cxx qt4 compiler_c boost')
    conf.check_cfg(atleast_pkgconfig_version='0.0.0')
    gstreamer_version = conf.check_cfg(modversion='gstreamer-0.10', mandatory=True)
    conf.check_cfg(package='gstreamer-0.10')
    conf.check_cfg(package='gstreamer-0.10', uselib_store='MYGSTREAMER', mandatory=True)
    program_options')
    conf.env.append_value('CXXFLAGS', ['-DWAF=1']) # test

def build(bld):
    cxxflags = bld.env.commonCxxFlags
    uselibcommon = 'QTMAIN QTCORE QTGUI QTOPENGL QTSVG QWIDGET QTSQL QTUITOOLS QTSCRIPT gstreamer-0.10'

    bld(features = 'qt4 cxx', includes = '.', source = 'Util.cpp', target = 'Util.o', uselib = uselibcommon, cxxflags=cxxflags)
    bld(features = 'qt4 cxx', includes = '.', source = 'MetaData.cpp', target = 'MetaData.o', uselib = uselibcommon, cxxflags=cxxflags)
    bld(features = 'qt4 cxx', includes = '.', source = 'id3.cpp', target = 'id3.o', uselib = uselibcommon, cxxflags=cxxflags)

    use = ['MetaData.o', 'Util.o', 'id3.o']

    bld(features = 'qt4 cxx', includes = '.', source = 'GSTEngine.cpp', target = 'GSTEngine.o', uselib = uselibcommon, use = use, lib = ['gstreamer-0.10'], libpath = ['/usr/lib'], cxxflags=cxxflags)

from waflib.TaskGen import feature, before_method, after_method

@feature('cxx')
@after_method('.')
@before_method('apply_incpaths')
def add_includes_paths(self):
    incs = set(self.to_list(getattr(self, 'includes', '')))
    for x in self.compiled_tasks:
        incs.add(x.inputs[0].parent.path_from(self.path))
    self.includes = list(incs)
If I include this list in the includes parameter of the GSTEngine.cpp bld statement, it works and goes on to the next file:
['/usr/include/gstreamer-0.10','/usr/include/glib-2.0',
'/usr/lib/x86_64-linux-gnu/glib-2.0/include/',
'/usr/include/libxml2/']
I am new to waf and would like to know how I can tell waf to pick up all the GStreamer-dependent include paths.
Hope you can help me, thanks.
Well, I found the solution.
I added these lines to the configure part of the wscript:
conf.check_cfg(atleast_pkgconfig_version='0.0.0')
conf.check_cfg(package='gstreamer-0.10', uselib_store='GSTREAMER', args='--cflags --libs', mandatory=True)
conf.check_cfg(package='taglib', uselib_store='TAGLIB', args='--cflags --libs', mandatory=True)
pkg-config finds the library and stores its flags under the given uselib variable (uselib_store='TAGLIB'), so just add that variable to uselib in the build part of the wscript:
bld(features = 'qt4 cxx cxxprogram', includes = include, source = 'main.cpp MasterDetail.qrc',
    target = 'app', uselib = 'TAGLIB GSTREAMER', cxxflags=cxxflags, use = use,
    subsystem='windows', linkflags=linkflags)

C++ Why can't the linker see my files?

Building a native module for Node.js under Cygwin / Windows:
I have a monkey.cc file with this:
#include <monkey/monkey.h>
Running
node-waf configure build
I get the following:
'configure' finished successfully (0.351s)
Waf: Entering directory `/usr/src/build'
[2/2] cxx_link: build/default/monkey_1.o -> build/default/monkey.node build/default/libmonkey.dll.a
Creating library file: default/libmonkey.dll.a
then the following error:
default/monkey_1.o:/usr/src/build/../monkey.cc:144: undefined reference to `_monkeyFoo'
monkeyFoo is defined in monkey.h, which is in a directory named monkey. I am running the above command from the directory containing the monkey directory and the monkey.cc file.
EDIT:
wscript, the Python script that node-waf runs, looks like this:
import os

srcdir = '.'
blddir = './build'
VERSION = '0.0.2'

def set_options(opt):
    opt.tool_options('compiler_cxx')

def configure(conf):
    conf.check_tool('compiler_cxx')
    conf.check_tool('node_addon')

def build(bld):
    monkey = bld.new_task_gen('cxx', 'shlib', 'node_addon')
    monkey.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall", "-L/usr/lib", "-lssl"]
    monkey.chmod = 0755
    monkey.target = 'monkey'
    monkey.source = 'monkey.cc'
What am I missing?
That's a linker error, not a compiler error. Do you have a definition for the function? (Not just a declaration.) And are you sure it's being linked in?
Add monkey.lib='crypto' in the wscript.
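For illustration, a sketch of how that attribute would slot into the build() function from the question (assuming, as this answer suggests, that the missing symbol is provided by libcrypto):

def build(bld):
    monkey = bld.new_task_gen('cxx', 'shlib', 'node_addon')
    monkey.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall", "-L/usr/lib", "-lssl"]
    monkey.lib = 'crypto'  # link against libcrypto, which presumably provides the missing symbol
    monkey.chmod = 0755
    monkey.target = 'monkey'
    monkey.source = 'monkey.cc'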

How can I make a 'wscript' file that compiles a .c file with one set of cxxflags and a .cpp file with a different set of cxxflags?

I am trying to compile a Node.js native module using two files: one .c file and one .cpp file. Here's what my 'wscript' file looks like:
def set_options(opt):
    opt.tool_options("compiler_cxx")

def configure(conf):
    conf.check_tool("compiler_cxx")
    conf.check_tool("node_addon")

def build(bld):
    obj = bld.new_task_gen("cxx")
    obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall", "-x", "objective-c"]
    obj.source = "c-file.c"

    obj = bld.new_task_gen("cxx", "shlib", "node_addon")
    obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall"]
    obj.target = "binding"
    obj.source = "cpp-file.cc"
This builds me a binding.node file which I can then partially use in Node, but as soon as I call the function that is located in the C file (the one compiled first in the wscript above), Node crashes with something like:
dyld: lazy symbol binding failed: Symbol not found: __Z9getSomethingv
Referenced from: /Users/nrajlich/test/build/default/binding.node
Expected in: flat namespace
This leads me to believe that the first file isn't being included in the linking phase, but I'm just not sure how I'm supposed to add it. Any ideas? Thanks in advance!
I don't think what I was originally trying to do is possible with waf.
In this case, I was trying to compile C++ and Obj-C code together and use them together. Originally I had the Obj-C code in the .c files and the C++ code in the .cc files, and was trying to pass different flags for the different file types.
Since then, I've learned that Obj-C and C++ may be combined in the same file! I just needed to add an -ObjC++ flag to g++. This was easily done in waf, and it also allowed me to consolidate all the needed code into a single file. Everything works great from there.
So I was able to solve what I was originally trying to do; however, I don't believe the original question I asked is possible to do with waf. Cheers!
def set_options(opt):
    opt.tool_options("compiler_cxx")

def configure(conf):
    conf.check_tool("compiler_cxx")
    conf.check_tool("node_addon")

def build(bld):
    obj = bld.new_task_gen("cxx", "shlib", "node_addon")
    obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall"]
    obj.target = "binding"
    obj.source = """
        cpp-file.cc
        c-file.c
    """
That's how you should do this.