gthread.h size of array is negative - c++

I have the following configuration in my .pro file:
INCLUDEPATH += /home/vickey/ossbuild-read-only/Shared/Build/Linux/x86/include/glib-2.0/
CONFIG += link_pkgconfig
PKGCONFIG += gstreamer-0.10
LIBS += -L/usr/lib `pkg-config --cflags --libs gstreamer-0.10`
LIBS += -L. -L/usr/lib -lphonon -lcurl -ltag -fopenmp -lsayonara_gstreamer
When I try to build the project, I get the following error:
/home/vickey/src/player/../../../../ossbuild-read-only/Shared/Build/Linux/x86/include/glib-2.0/glib/gthread.h:-1: In function 'gboolean g_once_init_enter(volatile gsize*)':
/home/vickey/src/player/../../../../ossbuild-read-only/Shared/Build/Linux/x86/include/glib-2.0/glib/gthread.h:348: error: size of array is negative
Double-clicking the error takes me to the gthread.h file, with the following lines indicated:
g_once_init_enter (volatile gsize *value_location)
{
if G_LIKELY ((gpointer) g_atomic_pointer_get (value_location) != NULL)
return FALSE;
else
return g_once_init_enter_impl (value_location);
}
What seems to be the problem?

I had the same error compiling an ancient glib and pango for a 64-bit platform.
Here's what the g_atomic_pointer_get source looks like in that version:
# define g_atomic_pointer_get(atomic) \
((void) sizeof (gchar [sizeof (*(atomic)) == sizeof (gpointer) ? 1 : -1]), \
(g_atomic_pointer_get) ((volatile gpointer G_GNUC_MAY_ALIAS *) (volatile void *) (atomic)))
So here atomic is a gsize, which must have the same sizeof as gpointer, i.e. void*.
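To illustrate the trick, here is a minimal standalone sketch (not glib code): the ternary makes the array size -1 whenever the two sizes differ, which produces exactly the "size of array is negative" diagnostic above.
/* Minimal illustration of the compile-time size check used by the macro
   above (not glib code). */
int main()
{
    void *p = 0;
    /* OK: both sides are pointer-sized, so the array size is 1. */
    (void) sizeof(char [sizeof(p) == sizeof(void *) ? 1 : -1]);

    unsigned int narrow = 0; /* stands in for a 4-byte gsize on a 64-bit build */
    /* On a 64-bit target, uncommenting the next line makes compilation fail
       with "size of array is negative", because sizeof(narrow) is 4 while
       sizeof(void *) is 8. */
    /* (void) sizeof(char [sizeof(narrow) == sizeof(void *) ? 1 : -1]); */
    (void) narrow;
    return 0;
}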
What helped me was redefining gsize and gssize to be 8 bytes wide on the 64-bit architecture in glibconfig.h.
Also update GLIB_SIZEOF_VOID_P, GLIB_SIZEOF_LONG and GLIB_SIZEOF_SIZE_T there.
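For reference, a sketch of what those glibconfig.h changes could look like, assuming an LP64 64-bit target (the exact surrounding lines differ between glib versions, so adapt as needed):
/* glibconfig.h adjustments described above; values assume an LP64 target. */
typedef unsigned long gsize;   /* previously a 4-byte unsigned type */
typedef signed long   gssize;  /* previously a 4-byte signed type */
#define GLIB_SIZEOF_VOID_P 8
#define GLIB_SIZEOF_LONG   8
#define GLIB_SIZEOF_SIZE_T 8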

Related

How do I distinguish `&&` from `and` in llvm-clang?

It seems that, to clang, the binary operator && is exactly the same as and, its alternative operator representation. In the AST, both end up as BinaryOperator [...] 'bool' '&&'. Is there a way to distinguish them nevertheless?
I was hoping to be able to retrieve the actual source code string but have not been able to do so yet.
I am trying to do this using clang-tidy while writing a check that suggests using and instead of &&. I looked at clang::ASTContext, but didn't find anything that would get me anywhere.
As you have discovered, the AST itself does not distinguish between the
two ways to spell the operator. However, the AST has source location
information that can be used to get the original text. The idea is to
use:
clang::BinaryOperator::getOperatorLoc()
to get the start location of the operator, then
clang::Lexer::getLocForEndOfToken()
to get its end location, then
clang::Lexer::getSourceText()
to get the operator text.
Note that when the operator results from a macro expansion, the source
retrieval method I am using does not work well, and I do not know if
that can be fixed.
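One partial mitigation is to detect that case explicitly, since a SourceLocation can be queried with isMacroID(). A minimal sketch, assuming opExpr is the matched BinaryOperator node as in the check below:
// Detect (and skip) operators whose spelling comes from a macro expansion,
// since raw text retrieval from the expansion location is unreliable.
clang::SourceLocation opLoc = opExpr->getOperatorLoc();
if (opLoc.isMacroID()) {
    return;
}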
Here is the core of a clang-tidy check that does what you want (except
for cases involving macros):
// https://stackoverflow.com/questions/11083066/getting-the-source-behind-clangs-ast
std::string get_source_text_raw(SourceRange range, SourceManager const &sm)
{
return std::string(
clang::Lexer::getSourceText(
clang::CharSourceRange::getCharRange(range), sm, clang::LangOptions()
)
);
}
void FindOpAnd::registerMatchers(ast_matchers::MatchFinder *Finder)
{
// Match use of operator '&&' or 'and'.
Finder->addMatcher(
binaryOperator(hasOperatorName("&&"))
.bind("op"), this);
}
void FindOpAnd::check(const MatchFinder::MatchResult &Result)
{
ASTContext &context = *(Result.Context);
SourceManager &sm = context.getSourceManager();
// Get the node bound to "op" in the match expression.
const auto *opExpr = Result.Nodes.getNodeAs<BinaryOperator>("op");
// Location of the start of the operator.
SourceLocation opLoc = opExpr->getOperatorLoc();
// Try to find the location of the end.
SourceLocation opEndLoc =
clang::Lexer::getLocForEndOfToken(opLoc, 0, sm, clang::LangOptions());
if (opEndLoc.isValid()) {
// Get the text of the operator.
SourceRange opRange(opLoc, opEndLoc);
std::string opText = get_source_text_raw(opRange, sm);
if (opText == "&&") {
diag(opLoc, "Using '&&' instead of 'and'.")
<< FixItHint::CreateReplacement(opRange, "and");
}
else if (opText == "and") {
// This is what we want.
}
else {
// This happens when the operator itself is the result of a macro
// expansion.
}
}
else {
// The end location will not be found if the operator was inside
// the expansion of a macro.
}
}
Sample input:
// op-and.cc
// Test input for clang-tidy FindOpAnd checker.
// Note: There are no digraphs or trigraphs for '&', so that is not a
// concern here.
#define MY_AMPER_OPERATOR &&
#define MY_AND_OPERATOR and
#define MY_AMPER_EXPR(a,b) ((a) && (b))
#define MY_AND_EXPR(a,b) ((a) and (b))
void f(bool a, bool b)
{
bool r;
r = a && b; // Reported.
r = a and b; // Not reported.
// Not reported: Operator name not recognized.
r = a MY_AMPER_OPERATOR b;
r = a MY_AND_OPERATOR b;
// Not reported: Operator end location is invalid so we cannot get the
// operator name.
r = MY_AMPER_EXPR(a,b);
r = MY_AND_EXPR(a,b);
}
// EOF
Sample output:
$ ./FindOpAnd.exe -checks=-*,FindOpAnd in/op-and.cc \
--export-fixes=out/op-and.cc.fixes.yaml --
1 warning generated.
[...]/in/op-and.cc:17:9: warning: Using '&&' instead of 'and'. [FindOpAnd]
r = a && b; // Reported.
^~
and
To actually apply the suggested fixes, pass the --fix option.
For completeness, here is the full checker source and its Makefile:
// FindOpAnd.cc
// Code for FindOpAnd.h.
#include "FindOpAnd.h" // this module
#include "clang/AST/ASTContext.h" // ASTContext
#include "clang/Lex/Lexer.h" // clang::Lexer
#include "clang-tidy/ClangTidyModule.h" // ClangTidyModule
#include "clang-tidy/ClangTidyModuleRegistry.h" // ClangTidyModuleRegistry
using namespace clang::ast_matchers;
namespace clang {
namespace tidy {
// https://stackoverflow.com/questions/11083066/getting-the-source-behind-clangs-ast
std::string get_source_text_raw(SourceRange range, SourceManager const &sm)
{
return std::string(
clang::Lexer::getSourceText(
clang::CharSourceRange::getCharRange(range), sm, clang::LangOptions()
)
);
}
void FindOpAnd::registerMatchers(ast_matchers::MatchFinder *Finder)
{
// Match use of operator '&&' or 'and'.
Finder->addMatcher(
binaryOperator(hasOperatorName("&&"))
.bind("op"), this);
}
void FindOpAnd::check(const MatchFinder::MatchResult &Result)
{
ASTContext &context = *(Result.Context);
SourceManager &sm = context.getSourceManager();
// Get the node bound to "op" in the match expression.
const auto *opExpr = Result.Nodes.getNodeAs<BinaryOperator>("op");
// Location of the start of the operator.
SourceLocation opLoc = opExpr->getOperatorLoc();
// Try to find the location of the end.
SourceLocation opEndLoc =
clang::Lexer::getLocForEndOfToken(opLoc, 0, sm, clang::LangOptions());
if (opEndLoc.isValid()) {
// Get the text of the operator.
SourceRange opRange(opLoc, opEndLoc);
std::string opText = get_source_text_raw(opRange, sm);
if (opText == "&&") {
diag(opLoc, "Using '&&' instead of 'and'.")
<< FixItHint::CreateReplacement(opRange, "and");
}
else if (opText == "and") {
// This is what we want.
}
else {
// This happens when the operator itself is the result of a macro
// expansion.
}
}
else {
// The end location will not be found if the operator was inside
// the expansion of a macro.
}
}
class FindOpAndModule : public ClangTidyModule {
public:
void addCheckFactories(ClangTidyCheckFactories &CheckFactories) override {
CheckFactories.registerCheck<FindOpAnd>("FindOpAnd");
}
};
static ClangTidyModuleRegistry::Add<FindOpAndModule> X(
"FindOpAndModule",
"Adds FindOpAnd check.");
// This is defined in libclangTidyMain.a. It does not appear to be
// declared in any header file, so I doubt this is really how it is
// meant to be used.
int clangTidyMain(int argc, const char **argv);
} // namespace tidy
} // namespace clang
int main(int argc, const char **argv)
{
return clang::tidy::clangTidyMain(argc, argv);
}
// EOF
// FindOpAnd.h
// clang-tidy check to find operator '&&' as opposed to 'and'.
#ifndef FIND_OP_AND_H
#define FIND_OP_AND_H
#include "clang-tidy/ClangTidyCheck.h" // ClangTidyCheck
#include "clang/ASTMatchers/ASTMatchFinder.h" // ast_matchers::MatchFinder
namespace clang {
namespace tidy {
class FindOpAnd : public ClangTidyCheck {
public:
FindOpAnd(StringRef Name, ClangTidyContext *Context)
: ClangTidyCheck(Name, Context) {}
void registerMatchers(ast_matchers::MatchFinder *Finder) override;
void check(const ast_matchers::MatchFinder::MatchResult &Result) override;
};
} // namespace tidy
} // namespace clang
#endif // FIND_OP_AND_H
# clang-tidy-op-and/Makefile
# Default target.
all:
.PHONY: all
# Eliminate all implicit rules.
.SUFFIXES:
# Delete a target when its recipe fails.
.DELETE_ON_ERROR:
# Do not remove "intermediate" targets.
.SECONDARY:
# ---- Paths ----
# Installation directory from a binary distribution.
# Has five subdirectories: bin include lib libexec share.
# Downloaded from: https://github.com/llvm/llvm-project/releases/download/llvmorg-14.0.0/clang+llvm-14.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
CLANG_LLVM_INSTALL_DIR = $(HOME)/opt/clang+llvm-14.0.0-x86_64-linux-gnu-ubuntu-18.04
# Program to query the various LLVM configuration options.
LLVM_CONFIG = $(CLANG_LLVM_INSTALL_DIR)/bin/llvm-config
# ---- Compiler options ----
# C++ compiler.
CXX = g++
# Compiler options, including preprocessor options.
CXXFLAGS =
CXXFLAGS += -Wall
# Get llvm compilation flags.
CXXFLAGS += $(shell $(LLVM_CONFIG) --cxxflags)
# Linker options.
LDFLAGS =
# Needed libraries. The order is important. I do not know a principled
# way to obtain this list. I did it by chasing down each missing symbol
# in a link error.
LDFLAGS += -lclangTidy
LDFLAGS += -lclangTidyMain
LDFLAGS += -lclangTidyPlugin
LDFLAGS += -lclangToolingCore
LDFLAGS += -lclangFormat
LDFLAGS += -lclangToolingInclusions
LDFLAGS += -lclangTidyAbseilModule
LDFLAGS += -lclangTidyAlteraModule
LDFLAGS += -lclangTidyAndroidModule
LDFLAGS += -lclangTidyBoostModule
LDFLAGS += -lclangTidyBugproneModule
LDFLAGS += -lclangTidyCERTModule
LDFLAGS += -lclangTidyConcurrencyModule
LDFLAGS += -lclangTidyCppCoreGuidelinesModule
LDFLAGS += -lclangTidyDarwinModule
LDFLAGS += -lclangTidyFuchsiaModule
LDFLAGS += -lclangTidyGoogleModule
LDFLAGS += -lclangTidyHICPPModule
LDFLAGS += -lclangTidyLLVMLibcModule
LDFLAGS += -lclangTidyLLVMModule
LDFLAGS += -lclangTidyLinuxKernelModule
LDFLAGS += -lclangTidyMPIModule
LDFLAGS += -lclangTidyMiscModule
LDFLAGS += -lclangTidyModernizeModule
LDFLAGS += -lclangTidyObjCModule
LDFLAGS += -lclangTidyOpenMPModule
LDFLAGS += -lclangTidyPerformanceModule
LDFLAGS += -lclangTidyPortabilityModule
LDFLAGS += -lclangTidyReadabilityModule
LDFLAGS += -lclangTidyZirconModule
LDFLAGS += -lclangTidyUtils
LDFLAGS += -lclangTransformer
LDFLAGS += -lclangTooling
LDFLAGS += -lclangFrontendTool
LDFLAGS += -lclangFrontend
LDFLAGS += -lclangDriver
LDFLAGS += -lclangSerialization
LDFLAGS += -lclangCodeGen
LDFLAGS += -lclangParse
LDFLAGS += -lclangSema
LDFLAGS += -lclangStaticAnalyzerFrontend
LDFLAGS += -lclangStaticAnalyzerCheckers
LDFLAGS += -lclangStaticAnalyzerCore
LDFLAGS += -lclangAnalysis
LDFLAGS += -lclangARCMigrate
LDFLAGS += -lclangRewrite
LDFLAGS += -lclangRewriteFrontend
LDFLAGS += -lclangEdit
LDFLAGS += -lclangCrossTU
LDFLAGS += -lclangIndex
LDFLAGS += -lclangAST
LDFLAGS += -lclangASTMatchers
LDFLAGS += -lclangLex
LDFLAGS += -lclangBasic
LDFLAGS += -lclang
# *After* clang libs, the llvm libs.
LDFLAGS += $(shell $(LLVM_CONFIG) --ldflags --libs --system-libs)
# ---- Recipes ----
# Pull in automatic dependencies.
-include $(wildcard obj/*.d)
# Compile a C++ source file.
obj/%.o: %.cc
@mkdir -p $(dir $@)
$(CXX) -MMD -c -o $@ $(USE_PCH) $< $(CXXFLAGS)
# Sources for 'FindOpAnd.exe'.
SRCS :=
SRCS += FindOpAnd.cc
# Objects for 'FindOpAnd.exe'.
OBJS := $(patsubst %.cc,obj/%.o,$(SRCS))
# Executable.
all: FindOpAnd.exe
FindOpAnd.exe: $(OBJS)
$(CXX) -g -Wall -o $@ $(OBJS) $(LDFLAGS)
# Run program on one input.
out/%: in/% FindOpAnd.exe
@mkdir -p $(dir $@)
./FindOpAnd.exe -checks=-*,FindOpAnd in/$* \
--export-fixes=out/$*.fixes.yaml \
-- </dev/null 2>&1 | cat
touch $@
# Run tests.
.PHONY: check
check: FindOpAnd.exe
check: out/op-and.cc
# Remove test outputs.
.PHONY: check-clean
check-clean:
rm -rf out
# Remove compile and test outputs.
.PHONY: clean
clean: check-clean
$(RM) *.exe
rm -rf obj
# EOF

Dispatching SIMD instructions + SIMDPP + qmake

I'm developing a Qt widget that makes use of SIMD instruction sets. I've compiled 3 versions: SSE3, AVX, and AVX2 (simdpp allows switching between them with a single #define).
Now, what I want is for my widget to switch automatically between these implementations according to the best supported instruction set. The guide provided with simdpp makes use of some makefile magic:
CXXFLAGS=""
test: main.o test_sse2.o test_sse3.o test_sse4_1.o test_null.o
g++ $^ -o test
main.o: main.cc
g++ main.cc $(CXXFLAGS) -c -o main.o
test_null.o: test.cc
g++ test.cc -c $(CXXFLAGS) -DSIMDPP_EMIT_DISPATCHER \
-DSIMDPP_DISPATCH_ARCH1=SIMDPP_ARCH_X86_SSE2 \
-DSIMDPP_DISPATCH_ARCH2=SIMDPP_ARCH_X86_SSE3 \
-DSIMDPP_DISPATCH_ARCH3=SIMDPP_ARCH_X86_SSE4_1 -o test_null.o
test_sse2.o: test.cc
g++ test.cc -c $(CXXFLAGS) -DSIMDPP_ARCH_X86_SSE2 -msse2 -o test_sse2.o
test_sse3.o: test.cc
g++ test.cc -c $(CXXFLAGS) -DSIMDPP_ARCH_X86_SSE3 -msse3 -o test_sse3.o
test_sse4_1.o: test.cc
g++ test.cc -c $(CXXFLAGS) -DSIMDPP_ARCH_X86_SSE4_1 -msse4.1 -o test_sse4_1.o
Here is a link to the guide: http://p12tic.github.io/libsimdpp/v2.0~rc2/libsimdpp/arch/dispatch.html
I have no idea how to implement such behavior with qmake. Any ideas?
The first thing that comes to mind is to create a shared library with the dispatched code and link it to the project, but here I'm stuck again. The app is cross-platform, which means it has to compile with both GCC and MSVC (vc120, to be exact), which forces using nmake on Windows; I tried, really, but it was the worst experience of my whole programming life.
Thanks in advance, programmers of the world!
Sorry if this is a bit late. Hope I can still help.
You need to consider 2 areas: compile time and run time.
Compile time - you need to create code to support the different features.
Run time - you need to create code to decide which features you can run.
What you want to do is create a dispatcher...
FuncImpl.h:
#pragma once
void execAvx2();
void execAvx();
void execSse();
void execDefault();
FuncImpl.cpp:
// Compile this file once for each variant with different compiler settings.
#if defined(__AVX2__)
void execAvx2()
{
// AVX2 impl
...
}
#elif defined (__AVX__)
void execAvx()
{
// AVX impl
...
}
#elif defined (__SSE4_2__)
void execSse()
{
// Sse impl
...
}
#else
void execDefault()
{
// Vanilla impl
...
}
#endif
DispatchFunc.cpp
#include "FuncImpl.h"
// Decide at runtime which code to run
void dispatchFunc()
{
if(CheckCpuAvx2Flag())
{
execAvx2();
}
else if(CheckCpuAvxFlag())
{
execAvx();
}
else if(CheckCpuSseFlags())
{
execSse();
}
else
{
execDefault();
}
}
What you can do is create a set of QMAKE_EXTRA_COMPILERS.
SampleCompiler.pri (Do this for each variant):
MyCompiler.name = MyCompiler # Name
MyCompiler.input = MY_SOURCES # Symbol of the source list to compile
MyCompiler.dependency_type = TYPE_C
MyCompiler.variable_out = OBJECTS
# EXTRA_CXXFLAGS = -mavx / -mavx2 / -msse4.2
# _var = creates FileName_var.o => replace with own variant (_sse, etc)
MyCompiler.output = ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_IN_BASE}_var$${first(QMAKE_EXT_OBJ)}
MyCompiler.commands = $${QMAKE_CXX} $(CXXFLAGS) $${EXTRA_CXXFLAGS} $(INCPATH) -c ${QMAKE_FILE_IN} -o${QMAKE_FILE_OUT}
QMAKE_EXTRA_COMPILERS += MyCompiler # Add my compiler
MyProject.pro
...
include(SseCompiler.pri)
include(AvxCompiler.pri)
include(Avx2Compiler.pri)
..
# Normal sources
# Will create FuncImpl.o and DispatchFunc.o
SOURCES += FuncImpl.cpp \
DispatchFunc.cpp
# Give the other compilers their sources
# Will create FuncImpl_avx2.o FuncImpl_avx.o FuncImpl_sse.o
AVX2_SOURCES += FuncImpl.cpp
AVX_SOURCES += FuncImpl.cpp
SSE_SOURCES += FuncImpl.cpp
# Link all objects
...
All you need now is to call dispatchFunc()!
Checking cpu flags is another exercise for you:
cpuid
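For GCC and clang, those checks can be sketched with the compiler's built-in feature detection. This is only an assumption about how the CheckCpu*Flag() helpers used above might be written; MSVC would need the __cpuid/__cpuidex intrinsics from <intrin.h> instead.
// Possible implementations of the runtime checks called by dispatchFunc(),
// using the GCC/clang __builtin_cpu_supports builtin.
bool CheckCpuAvx2Flag() { return __builtin_cpu_supports("avx2"); }
bool CheckCpuAvxFlag()  { return __builtin_cpu_supports("avx"); }
bool CheckCpuSseFlags() { return __builtin_cpu_supports("sse4.2"); }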
These are just project defines. You set them with DEFINES += in your .pro file. You set the flags for the instruction sets you want to support, and simdpp takes care of selecting the best one for the processor at runtime.
See, for example: Add a define to qmake WITH a value?
Here is a qmake .pro file for use with SIMD dispatchers. It is quite verbose, so for more instruction sets, it is better to generate the dispatched blocks by a script, write it to a .pri file and then include it from your main .pro file.
TEMPLATE = app
TARGET = simd_test
INCLUDEPATH += .
QMAKE_CXXFLAGS = -O3 -std=c++17
SOURCES += main.cpp
SOURCES_dispatch = test.cpp
{
# SSE2
DISPATCH_CXXFLAGS = -msse2
DISPATCH_SUFFIX = _sse2
src_dispatch_sse2.name = src_dispatch_sse2
src_dispatch_sse2.input = SOURCES_dispatch
src_dispatch_sse2.dependency_type = TYPE_C
src_dispatch_sse2.variable_out = OBJECTS
src_dispatch_sse2.output = ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_IN_BASE}$${DISPATCH_SUFFIX}$${first(QMAKE_EXT_OBJ)}
src_dispatch_sse2.commands = $${QMAKE_CXX} $(CXXFLAGS) $${DISPATCH_CXXFLAGS} $(INCPATH) -c ${QMAKE_FILE_IN} -o ${QMAKE_FILE_OUT}
QMAKE_EXTRA_COMPILERS += src_dispatch_sse2
}
{
# SSE3
DISPATCH_CXXFLAGS = -msse3
DISPATCH_SUFFIX = _sse3
src_dispatch_sse3.name = src_dispatch_sse3
src_dispatch_sse3.input = SOURCES_dispatch
src_dispatch_sse3.dependency_type = TYPE_C
src_dispatch_sse3.variable_out = OBJECTS
src_dispatch_sse3.output = ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_IN_BASE}$${DISPATCH_SUFFIX}$${first(QMAKE_EXT_OBJ)}
src_dispatch_sse3.commands = $${QMAKE_CXX} $(CXXFLAGS) $${DISPATCH_CXXFLAGS} $(INCPATH) -c ${QMAKE_FILE_IN} -o ${QMAKE_FILE_OUT}
QMAKE_EXTRA_COMPILERS += src_dispatch_sse3
}
{
# SSE41
DISPATCH_CXXFLAGS = -msse4.1
DISPATCH_SUFFIX = _sse41
src_dispatch_sse41.name = src_dispatch_sse41
src_dispatch_sse41.input = SOURCES_dispatch
src_dispatch_sse41.dependency_type = TYPE_C
src_dispatch_sse41.variable_out = OBJECTS
src_dispatch_sse41.output = ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_IN_BASE}$${DISPATCH_SUFFIX}$${first(QMAKE_EXT_OBJ)}
src_dispatch_sse41.commands = $${QMAKE_CXX} $(CXXFLAGS) $${DISPATCH_CXXFLAGS} $(INCPATH) -c ${QMAKE_FILE_IN} -o ${QMAKE_FILE_OUT}
QMAKE_EXTRA_COMPILERS += src_dispatch_sse41
}

Sphinx Ubuntu 14 c++ sphinx_config.h not found

I have installed PocketSphinx on Ubuntu 14 and am now trying to create a simple sample. I took the code from the official Sphinx website.
#include <pocketsphinx.h>
int
main(int argc, char *argv[])
{
ps_decoder_t *ps;
cmd_ln_t *config;
FILE *fh;
char const *hyp, *uttid;
int16 buf[512];
int rv;
int32 score;
config = cmd_ln_init(NULL, ps_args(), TRUE,
"-hmm", MODELDIR "/en-us/en-us",
"-lm", MODELDIR "/en-us/en-us.lm.dmp",
"-dict", MODELDIR "/en-us/cmudict-en-us.dict",
NULL);
if (config == NULL)
return 1;
ps = ps_init(config);
if (ps == NULL)
return 1;
fh = fopen("goforward.raw", "rb");
if (fh == NULL)
return -1;
rv = ps_start_utt(ps);
if (rv < 0)
return 1;
while (!feof(fh)) {
size_t nsamp;
nsamp = fread(buf, 2, 512, fh);
rv = ps_process_raw(ps, buf, nsamp, FALSE, FALSE);
}
rv = ps_end_utt(ps);
if (rv < 0)
return 1;
hyp = ps_get_hyp(ps, &score);
if (hyp == NULL)
return 1;
printf("Recognized: %s\n", hyp);
fclose(fh);
ps_free(ps);
cmd_ln_free_r(config);
return 0;
}
And the qmake .pro file is:
QT += core
QT -= gui
TARGET = OpenCVQt
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
DEPENDPATH += /usr/local/lib
INCLUDEPATH += /usr/local/include
INCLUDEPATH += /usr/local/include/pocketsphinx
INCLUDEPATH += /usr/local/include/sphinxbase
LIBS += -lopencv_core
LIBS += -lopencv_imgproc
LIBS += -lopencv_highgui
LIBS +=-lpocketsphinx
LIBS += -lsphinxbase
LIBS += -lsphinxad
SOURCES += main.cpp
I can't understand what is wrong. I can see sphinx_config.h in /usr/local/include/sphinxbase. Thanks.
18:54:06: Starting: "/usr/bin/make"
/home/warezovvv/Qt/5.4/gcc_64/bin/qmake -spec linux-g++ CONFIG+=debug -o Makefile ../OpenCVQt/OpenCVQt.pro
g++ -c -pipe -g -Wall -W -D_REENTRANT -fPIE -DQT_CORE_LIB -I../OpenCVQt -I. -I/usr/local/include -I/usr/local/include/pocketsphinx -I/usr/local/include/sphinxbase -I../../../Qt/5.4/gcc_64/include -I../../../Qt/5.4/gcc_64/include/QtCore -I. -I../../../Qt/5.4/gcc_64/mkspecs/linux-g++ -o main.o ../OpenCVQt/main.cpp
In file included from /usr/include/sphinxbase/cmd_ln.h:66:0,
from /usr/local/include/pocketsphinx/pocketsphinx.h:52,
from ../OpenCVQt/main.cpp:1:
/usr/include/sphinxbase/prim_type.h:88:27: fatal error: sphinx_config.h: No such file or directory
#include <sphinx_config.h>
^
compilation terminated.
make: *** [main.o] Error 1
18:54:06: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project OpenCVQt (kit: Desktop Qt 5.4.1 GCC 64bit)
When executing step "Make"
18:54:06: Elapsed time: 00:01.
The header error is gone. Now there is a new error:
22:42:41: Running steps for project OpenCVQt...
22:42:41: Configuration unchanged, skipping qmake step.
22:42:41: Starting: "/usr/bin/make"
/home/warezovvv/Qt/5.4/gcc_64/bin/qmake -spec linux-g++ CONFIG+=debug -o Makefile ../OpenCVQt/OpenCVQt.pro
g++ -c -pipe -g -Wall -W -D_REENTRANT -fPIE -DQT_CORE_LIB -I../OpenCVQt -I. -I/usr/local/include -I/usr/local/include/pocketsphinx -I/usr/local/include/sphinxbase -I/usr/include/sphinxbase -I../../../Qt/5.4/gcc_64/include -I../../../Qt/5.4/gcc_64/include/QtCore -I. -I../../../Qt/5.4/gcc_64/mkspecs/linux-g++ -o main.o ../OpenCVQt/main.cpp
../OpenCVQt/main.cpp: In function 'int main(int, char**)':
../OpenCVQt/main.cpp:14:22: error: 'MODELDIR' was not declared in this scope
"-hmm", MODELDIR "/en-us/en-us",
^
../OpenCVQt/main.cpp:15:30: error: expected ')' before string constant
"-lm", MODELDIR "/en-us/en-us.lm.dmp",
^
../OpenCVQt/main.cpp:16:32: error: expected ')' before string constant
"-dict", MODELDIR "/en-us/cmudict-en-us.dict",
^
../OpenCVQt/main.cpp:9:23: warning: unused variable 'uttid' [-Wunused-variable]
char const *hyp, *uttid;
^
../OpenCVQt/main.cpp: At global scope:
../OpenCVQt/main.cpp:4:1: warning: unused parameter 'argc' [-Wunused-parameter]
main(int argc, char *argv[])
^
../OpenCVQt/main.cpp:4:1: warning: unused parameter 'argv' [-Wunused-parameter]
make: *** [main.o] Error 1
22:42:42: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project OpenCVQt (kit: Desktop Qt 5.4.1 GCC 64bit)
When executing step "Make"
22:42:42: Elapsed time: 00:01.
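The MODELDIR errors appear because MODELDIR is a compile-time define that the build is expected to supply, for example as a -D compiler flag or via DEFINES += in the .pro file. As a stopgap it can also be defined in the source; the path below is only an assumption, so point it at wherever your models are actually installed.
/* Fallback definition of MODELDIR; normally this comes from the build system. */
#ifndef MODELDIR
#define MODELDIR "/usr/local/share/pocketsphinx/model"
#endif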
I solved it with this CMake config:
cmake_minimum_required(VERSION 2.8)
add_definitions(-std=c++11)
find_package(PkgConfig REQUIRED)
pkg_check_modules(POCKETSPHINX REQUIRED pocketsphinx)
pkg_check_modules(SPHINXBASE REQUIRED sphinxbase)
message(STATUS "SPHINXBASE_LIBRARIES => " "${SPHINXBASE_LIBRARIES}")
message(STATUS "POCKETSPHINX_LIBRARIES => " "${POCKETSPHINX_LIBRARIES}")
message(STATUS "POCKETSPHINX_INCLUDE_DIRS => " "${POCKETSPHINX_INCLUDE_DIRS}")
message(STATUS "SPHINXBASE_INCLUDE_DIRS => " "${SPHINXBASE_INCLUDE_DIRS}")
set(
your_pocketsphinx_app_src
your_pocketsphinx_app.cpp
...
)
add_executable(your_pocketsphinx_app your_pocketsphinx_app.cpp)
set_property(TARGET your_pocketsphinx_app PROPERTY CXX_STANDARD 11)
target_include_directories(your_pocketsphinx_app PUBLIC ${POCKETSPHINX_INCLUDE_DIRS})
target_include_directories(your_pocketsphinx_app PUBLIC ${SPHINXBASE_INCLUDE_DIRS})
target_compile_options(your_pocketsphinx_app PUBLIC ${POCKETSPHINX_CFLAGS_OTHER})
target_compile_options(your_pocketsphinx_app PUBLIC ${SPHINXBASE_CFLAGS_OTHER})
target_link_libraries(your_pocketsphinx_app ${SPHINXBASE_LIBRARIES})
target_link_libraries(your_pocketsphinx_app ${POCKETSPHINX_LIBRARIES})
# Binary to be installed.
install(TARGETS your_pocketsphinx_app DESTINATION bin)
You seem to have the sphinxbase headers installed in /usr/include, not in /usr/local/include. If you intentionally installed sphinxbase this way, you need to add -I/usr/include/sphinxbase in the Makefile, not -I/usr/local/include/sphinxbase.
Overall, pocketsphinx-5prealpha requires sphinxbase-5prealpha, and the default installation prefix for both is /usr/local. I suggest you remove the sphinxbase installed in /usr.

Qt Creator on Mac and boost libraries

I am running Qt Creator on a Mac and I want to start working with the Boost libraries. So I installed Boost using
brew install boost
After that I created a small Boost hello world program and made the following changes in the .pro file:
TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt
unix:INCLUDEPATH += "/usr/local/Cellar/boost/1.55.0_1/include/"
unix:LIBPATH += "-L/usr/local/Cellar/boost/1.55.0_1/lib/"
SOURCES += main.cpp
LIBS += \
-lboost_date_time \
-lboost_filesystem \
-lboost_program_options \
-lboost_regex \
-lboost_signals \
-lboost_system
I am still unable to build. What could be the reason? Please suggest what the possible mistake could be.
The errors are:
library not found for -lboost_data_time
linker command failed with exit code 1 (use -v to see invocation)
This is taking a bit from Uflex's answer, as he missed something.
So keep the same code:
//make sure that there is a boost folder in your boost include directory
#include <boost/chrono.hpp>
#include <cmath>
#include <iostream> // needed for std::cout / std::endl
int main()
{
auto start = boost::chrono::system_clock::now();
for ( long i = 0; i < 10000000; ++i )
std::sqrt( 123.456L ); // burn some time
auto sec = boost::chrono::system_clock::now() - start;
std::cout << "took " << sec.count() << " seconds" << std::endl;
return 0;
}
But lets change his .pro a bit:
TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt
SOURCES += main.cpp
macx {
QMAKE_CXXFLAGS += -std=c++11
_BOOST_PATH = /usr/local/Cellar/boost/1.55.0_1
INCLUDEPATH += "$${_BOOST_PATH}/include/"
LIBS += -L$${_BOOST_PATH}/lib
## Use only one of these:
LIBS += -lboost_chrono-mt -lboost_system # using dynamic lib (not sure if you need that "-mt" at the end or not)
#LIBS += $${_BOOST_PATH}/lib/libboost_chrono-mt.a # using static lib
}
The only thing I have added to this is boost_system (-lboost_system).
That should solve the undefined-symbol issue in his original version, and allow you to add your other libraries,
such as -lboost_date_time, which for me worked perfectly with the brew install.
Granted, my path is actually: /usr/local/Cellar/boost/1.55.0_2
Boost libraries are modularized; you only need to link against the libraries that you are using. Some libraries are header-only, so you don't need to do anything for them: having Boost reachable in your include path is enough.
You can try to compile this:
//make sure that there is a boost folder in your boost include directory
#include <boost/chrono.hpp>
#include <cmath>
#include <iostream> // needed for std::cout / std::endl
int main()
{
auto start = boost::chrono::system_clock::now();
for ( long i = 0; i < 10000000; ++i )
std::sqrt( 123.456L ); // burn some time
auto sec = boost::chrono::system_clock::now() - start;
std::cout << "took " << sec.count() << " seconds" << std::endl;
return 0;
}
And in the .pro file:
TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt
SOURCES += main.cpp
macx {
QMAKE_CXXFLAGS += -std=c++11
_BOOST_PATH = /usr/local/Cellar/boost/1.55.0_1
INCLUDEPATH += "$${_BOOST_PATH}/include/"
LIBS += -L$${_BOOST_PATH}/lib
## Use only one of these:
LIBS += -lboost_chrono-mt # using dynamic lib (not sure if you need that "-mt" at the end or not)
#LIBS += $${_BOOST_PATH}/lib/libboost_chrono-mt.a # using static lib
}

Compiling Cuda code in Qt Creator on Windows

I have been trying for days to get a Qt project file running on a 32-bit Windows 7 system, in which I want/need to include Cuda code. This combination of things is either so simple that no one ever bothered to put an example online, or so difficult that nobody ever succeeded, it seems. Either way, the only helpful forum threads I found were about the same issue on Linux or Mac, or with Visual Studio on Windows.
All of these give all sorts of different errors, however, whether due to linking or clashing libraries, or spaces in file names or non-existing folders in the Windows version of the Cuda SDK.
Is there someone who has a clear .pro file to offer that does the trick?
I am aiming to compile a simple programme with ordinary C++ code in Qt style, with Qt 4.8 libraries, which references several Cuda modules in .cu files. Something of the form:
TestCUDA \
TestCUDA.pro
main.cpp
test.cu
So I finally managed to assemble a .pro file that works on my system, and probably on all Windows systems. Below is a small project file plus test programme that works at least on my machine.
The file system looks as follows:
TestCUDA \
TestCUDA.pro
main.cpp
vectorAddition.cu
The project file reads:
TARGET = TestCUDA
# Define output directories
DESTDIR = release
OBJECTS_DIR = release/obj
CUDA_OBJECTS_DIR = release/cuda
# Source files
SOURCES += src/main.cpp
# This makes the .cu files appear in your project
OTHER_FILES += vectorAddition.cu
# CUDA settings <-- may change depending on your system
CUDA_SOURCES += src/cuda/vectorAddition.cu
CUDA_SDK = "C:/ProgramData/NVIDIA Corporation/NVIDIA GPU Computing SDK 4.2/C" # Path to cuda SDK install
CUDA_DIR = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v4.2" # Path to cuda toolkit install
SYSTEM_NAME = Win32 # Depending on your system either 'Win32', 'x64', or 'Win64'
SYSTEM_TYPE = 32 # '32' or '64', depending on your system
CUDA_ARCH = sm_11 # Type of CUDA architecture, for example 'compute_10', 'compute_11', 'sm_10'
NVCC_OPTIONS = --use_fast_math
# include paths
INCLUDEPATH += $$CUDA_DIR/include \
$$CUDA_SDK/common/inc/ \
$$CUDA_SDK/../shared/inc/
# library directories
QMAKE_LIBDIR += $$CUDA_DIR/lib/$$SYSTEM_NAME \
$$CUDA_SDK/common/lib/$$SYSTEM_NAME \
$$CUDA_SDK/../shared/lib/$$SYSTEM_NAME
# Add the necessary libraries
LIBS += -lcuda -lcudart
# The following library conflicts with something in Cuda
QMAKE_LFLAGS_RELEASE = /NODEFAULTLIB:msvcrt.lib
QMAKE_LFLAGS_DEBUG = /NODEFAULTLIB:msvcrtd.lib
# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
# Configuration of the Cuda compiler
CONFIG(debug, debug|release) {
# Debug mode
cuda_d.input = CUDA_SOURCES
cuda_d.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
cuda_d.commands = $$CUDA_DIR/bin/nvcc.exe -D_DEBUG $$NVCC_OPTIONS $$CUDA_INC $$LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
cuda_d.dependency_type = TYPE_C
QMAKE_EXTRA_COMPILERS += cuda_d
}
else {
# Release mode
cuda.input = CUDA_SOURCES
cuda.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
cuda.commands = $$CUDA_DIR/bin/nvcc.exe $$NVCC_OPTIONS $$CUDA_INC $$LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
cuda.dependency_type = TYPE_C
QMAKE_EXTRA_COMPILERS += cuda
}
Note the QMAKE_LFLAGS_RELEASE = /NODEFAULTLIB:msvcrt.lib: it took me a long time to figure out, but this library seems to clash with other things in Cuda, which produces strange linking warnings and errors. If someone has an explanation for this, and potentially a prettier way to get around this, I'd like to hear it.
Also, since Windows file paths often include spaces (and NVIDIA's SDK by default does so too), it is necessary to artificially add quotation marks around the include paths. Again, if someone knows a more elegant way of solving this problem, I'd be interested to know.
The main.cpp file looks like this:
#include <cuda.h>
#include <builtin_types.h>
#include <drvapi_error_string.h>
#include <QtCore/QCoreApplication>
#include <QDebug>
// Forward declare the function in the .cu file
void vectorAddition(const float* a, const float* b, float* c, int n);
void printArray(const float* a, const unsigned int n) {
QString s = "(";
unsigned int ii;
for (ii = 0; ii < n - 1; ++ii)
s.append(QString::number(a[ii])).append(", ");
s.append(QString::number(a[ii])).append(")");
qDebug() << s;
}
int main(int argc, char* argv [])
{
QCoreApplication app(argc, argv); // named object so the application outlives this statement
int deviceCount = 0;
int cudaDevice = 0;
char cudaDeviceName [100];
unsigned int N = 50;
float *a, *b, *c;
cuInit(0);
cuDeviceGetCount(&deviceCount);
cuDeviceGet(&cudaDevice, 0);
cuDeviceGetName(cudaDeviceName, 100, cudaDevice);
qDebug() << "Number of devices: " << deviceCount;
qDebug() << "Device name:" << cudaDeviceName;
a = new float [N]; b = new float [N]; c = new float [N];
for (unsigned int ii = 0; ii < N; ++ii) {
a[ii] = qrand();
b[ii] = qrand();
}
// This is the function call in which the kernel is called
vectorAddition(a, b, c, N);
qDebug() << "input a:"; printArray(a, N);
qDebug() << "input b:"; printArray(b, N);
qDebug() << "output c:"; printArray(c, N);
delete[] a; // arrays allocated with new[] must be released with delete[]
delete[] b;
delete[] c;
}
The Cuda file vectorAddition.cu, which describes a simple vector addition, look like this:
#include <cuda.h>
#include <builtin_types.h>
extern "C"
__global__ void vectorAdditionCUDA(const float* a, const float* b, float* c, int n)
{
int ii = blockDim.x * blockIdx.x + threadIdx.x;
if (ii < n)
c[ii] = a[ii] + b[ii];
}
void vectorAddition(const float* a, const float* b, float* c, int n) {
float *a_cuda, *b_cuda, *c_cuda;
unsigned int nBytes = sizeof(float) * n;
int threadsPerBlock = 256;
int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;
// allocate and copy memory into the device
cudaMalloc((void **)& a_cuda, nBytes);
cudaMalloc((void **)& b_cuda, nBytes);
cudaMalloc((void **)& c_cuda, nBytes);
cudaMemcpy(a_cuda, a, nBytes, cudaMemcpyHostToDevice);
cudaMemcpy(b_cuda, b, nBytes, cudaMemcpyHostToDevice);
vectorAdditionCUDA<<<blocksPerGrid, threadsPerBlock>>>(a_cuda, b_cuda, c_cuda, n);
// load the answer back into the host
cudaMemcpy(c, c_cuda, nBytes, cudaMemcpyDeviceToHost);
cudaFree(a_cuda);
cudaFree(b_cuda);
cudaFree(c_cuda);
}
If you get this to work, then more complicated examples are self-evident, I think.
Edit (24-1-2013): I added QMAKE_LFLAGS_DEBUG = /NODEFAULTLIB:msvcrtd.lib and the CONFIG(debug) section with the extra -D_DEBUG flag, so that it also compiles in debug mode.
Using MSVC 2010, I found that the linker does not accept the -l parameter; however, nvcc needs it. Therefore I made a simple change in the .pro file:
# Add the necessary libraries
CUDA_LIBS = cuda cudart
# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
# LIBRARIES IN FORMAT NEEDED BY NVCC
NVCC_LIBS = $$join(CUDA_LIBS,' -l','-l', '')
# LIBRARIES IN FORMAT NEEDED BY VISUAL C++ LINKER
LIBS += $$join(CUDA_LIBS,'.lib ', '', '.lib')
And the nvcc command (release version):
cuda.commands = $$CUDA_DIR/bin/nvcc.exe $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
$$NVCC_LIBS was inserted instead of $$LIBS.
The whole .pro file, which works for me:
QT += core
QT -= gui
TARGET = TestCUDA
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
# Define output directories
DESTDIR = release
OBJECTS_DIR = release/obj
CUDA_OBJECTS_DIR = release/cuda
# Source files
SOURCES += main.cpp
# This makes the .cu files appear in your project
OTHER_FILES += vectorAddition.cu
# CUDA settings <-- may change depending on your system
CUDA_SOURCES += vectorAddition.cu
#CUDA_SDK = "C:/ProgramData/NVIDIA Corporation/NVIDIA GPU Computing SDK 4.2/C" # Path to cuda SDK install
CUDA_DIR = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v5.0" # Path to cuda toolkit install
SYSTEM_NAME = win32 # Depending on your system either 'Win32', 'x64', or 'Win64'
SYSTEM_TYPE = 32 # '32' or '64', depending on your system
CUDA_ARCH = sm_11 # Type of CUDA architecture, for example 'compute_10', 'compute_11', 'sm_10'
NVCC_OPTIONS = --use_fast_math
# include paths
INCLUDEPATH += $$CUDA_DIR/include
#$$CUDA_SDK/common/inc/ \
#$$CUDA_SDK/../shared/inc/
# library directories
QMAKE_LIBDIR += $$CUDA_DIR/lib/$$SYSTEM_NAME
#$$CUDA_SDK/common/lib/$$SYSTEM_NAME \
#$$CUDA_SDK/../shared/lib/$$SYSTEM_NAME
# The following library conflicts with something in Cuda
QMAKE_LFLAGS_RELEASE = /NODEFAULTLIB:msvcrt.lib
QMAKE_LFLAGS_DEBUG = /NODEFAULTLIB:msvcrtd.lib
# Add the necessary libraries
CUDA_LIBS = cuda cudart
# The following makes sure all path names (which often include spaces) are put between quotation marks
CUDA_INC = $$join(INCLUDEPATH,'" -I"','-I"','"')
NVCC_LIBS = $$join(CUDA_LIBS,' -l','-l', '')
LIBS += $$join(CUDA_LIBS,'.lib ', '', '.lib')
# Configuration of the Cuda compiler
CONFIG(debug, debug|release) {
# Debug mode
cuda_d.input = CUDA_SOURCES
cuda_d.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
cuda_d.commands = $$CUDA_DIR/bin/nvcc.exe -D_DEBUG $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
cuda_d.dependency_type = TYPE_C
QMAKE_EXTRA_COMPILERS += cuda_d
}
else {
# Release mode
cuda.input = CUDA_SOURCES
cuda.output = $$CUDA_OBJECTS_DIR/${QMAKE_FILE_BASE}_cuda.o
cuda.commands = $$CUDA_DIR/bin/nvcc.exe $$NVCC_OPTIONS $$CUDA_INC $$NVCC_LIBS --machine $$SYSTEM_TYPE -arch=$$CUDA_ARCH -c -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
cuda.dependency_type = TYPE_C
QMAKE_EXTRA_COMPILERS += cuda
}
I also added some essential declarations, i.e. QT += core for the app to work, and also removed the SDK part, which I did not find useful in this case.
I tried to make this combination work, but could not due to a number of dependencies in my project.
My final solution was to break the application into two separate applications on Windows:
1) a CUDA application developed in VC and running as a service/DLL on Windows;
2) a GUI interface developed in Qt, using the DLL for the CUDA-related tasks.
Hope it saves some time for others.