I have a list of source files that I have included in the sources variable, and all of the required header files in the includes variable, so my GN file looks something like this:
shared_library("mylib") {
  sources = [
    <all the source files>
  ]
  includes = [
    <all the header files>
  ]
}
And then I run ninja -C out/Output <mytarget>, but it gives me this error:
You set the variable "includes" here and it was unused before it went out of scope.
Now, in the source files the headers are wrapped in <>, and if I change them to "" all the errors are resolved, but there must be a way to define include paths so that all the dependencies are resolved automatically.
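GN has no includes variable, which is why it reports the variable as unused. Header search paths normally go in include_dirs (either directly on the target or in a config), and the .h files themselves can simply be listed in sources. A minimal sketch, assuming the headers live under a hypothetical directory such as //mylib/include:

shared_library("mylib") {
  sources = [
    <all the source files, including the .h files>
  ]
  # Added to the compiler's -I flags, so both <...> and "..." includes resolve.
  include_dirs = [ "//mylib/include" ]
}

If dependent targets also need those headers, the usual pattern is to move include_dirs into a config and attach it to the library via public_configs so the paths propagate automatically.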
Let's say I'm considering a directory structure that looks something like the following for a C project using the Bazel build system.
/util
    util.c
    util.h
    /error
        error.c
        error.h
/math
    math.c
    math.h
    /linalg
        matrix.c
        matrix.h
And, the following additional requirements:
Each directory should be built as a cc_library that contains or relinks its child cc_libraries.
cc_libraries (or their children) will depend on sibling cc_libraries. For example, matrix.c may #include error.h.
Are there ways to preserve this structure when using Bazel that don't result in difficult-to-maintain BUILD files? Are nested packages usable?
/util
    BUILD
    util.c
    util.h
    /error
        BUILD
        error.c
        error.h
/math
    BUILD
    math.c
    math.h
    /linalg
        BUILD
        matrix.c
        matrix.h
Or, should the folder structure simply take a shallower form?
/util
    BUILD
    util.h
    util.c
/error
    BUILD
    error.c
    error.h
/math
    BUILD
    math.c
    math.h
/linalg
    BUILD
    matrix.c
    matrix.h
I think either approach could work, but I prefer the first.
There's no rule dictating what can (or cannot) depend on what, as long as the dependency is visible to the rule and no cycles are formed. A heuristic like "deps only go sideways or down" doesn't prevent cycles on its own, but it can help in avoiding them.
Nothing looks particularly difficult to maintain in this example; just specify your deps as needed:
cc_library(
    name = "linalg",
    hdrs = ["matrix.h"],
    srcs = ["matrix.c"],
    visibility = ["//visibility:public"],
    deps = ["//util/error"],
)
Etc.
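For the sibling dependency mentioned in the question (matrix.c including error.h), the package being depended on has to export its header and be visible to the consumer. A minimal sketch of what util/error/BUILD might look like, with the target name and public visibility chosen for illustration:

cc_library(
    name = "error",
    hdrs = ["error.h"],
    srcs = ["error.c"],
    # Public so that //math/linalg (and any other sibling) can depend on it.
    visibility = ["//visibility:public"],
)

With that in place, matrix.c would include the header by its workspace-relative path, e.g. #include "util/error/error.h", since Bazel puts the workspace root on the include path by default.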
OK, so I am having an issue with errors in VSCode. Basically, I decided to reorganize and move my header files into a separate folder, include. My directory, put simply, is as follows:
-build
-include
|-SDL2
|-SDL2_Image
|-someHeaderFile1.h
|-someHeaderFile2.h
-src
|-main.cpp
|-someCppFile.cpp
-Makefile
My Makefile contains:
SRC_DIR = src
BUILD_DIR = build/debug
CC = g++
SRC_FILES = $(wildcard $(SRC_DIR)/*.cpp)
OBJ_NAME = play
INCLUDE_PATHS = -Iinclude -I /include
LIBRARY_PATHS = -Llib
COMPILER_FLAGS = -std=c++11 -Wall -O0 -g
LINKER_FLAGS = -lsdl2 -lsdl2_image
all:
	$(CC) $(COMPILER_FLAGS) $(LINKER_FLAGS) $(INCLUDE_PATHS) $(LIBRARY_PATHS) $(SRC_FILES) -o $(BUILD_DIR)/$(OBJ_NAME)
The program compiles and runs. However, my issue is with VSCode: it shows an error when the include is written as #include "someHeaderFile1.h" instead of #include "../include/someHeaderFile1.h".
Any assistance would be appreciated.
You need to add that folder's path to the Include Path setting. One way to do that is shown below, step by step.
Step 1
Press Ctrl + Shift + P.
This will open the Command Palette. Select the option C/C++: Edit Configurations (UI).
Step 2
After selecting Edit Configurations, a settings page will open. Scroll down to the Include Path field and paste the path to your include folder there.
Step 3
After adding the include folder's path to the Include Path field, you can close this window and the VSCode errors you mentioned will be gone.
If you have installed the Microsoft C/C++ extension properly, and the directory you show is the root of your VSCode workspace, you can add include path options via C/C++: Edit Configurations (UI), or edit .vscode/c_cpp_properties.json like:
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**",
                // Add your custom include path here
                "${workspaceFolder}/include/**"
            ],
            "defines": [],
            "compilerPath": "/usr/bin/g++",
            // ...other options
        }
    ],
    "version": 4
}
For more details, refer to the documentation.
I have a CMakeLists.txt file in which I added:
set(CMAKE_CXX_FLAGS "-fprofile-arcs -ftest-coverage -pthread -std=c++11 -O0 ${CMAKE_CXX_FLAGS}")
It is generating the report files in:
project_root/build/CMakeFiles/project.dir/
BUT the files it generates have the extensions .cpp.gcno, .cpp.gcda and .cpp.o.
Also, they are not in the same folder as the src files, which are at:
project_root/src/
When I move the report files to the src/ folder and execute
$ gcov main.cpp
I get this error message:
main.gcno:cannot open notes file
So I rename the .cpp.gcno, .cpp.gcda and .cpp.o files to .gcno, .gcda and .o, and finally I get the following:
gcov main.cpp
Lines executed:86.67% of 15
Creating 'main.cpp.gcov'
I have over 50 files and can't do this manually for each one.
I need to be able to run gcov once for all files and generate report for all files. I don't care where the files are generated.
It is generating the report files in: project_root/build/CMakeFiles/project.dir/
This is the directory where all intermediate files for the 'project' executable are built.
BUT the files it generates have the extensions '.cpp.gcno', '.cpp.gcda' and '.cpp.o'
This is because CMake creates a .cpp.o object file from the .cpp source (you can see that by running make VERBOSE=1). In accordance with the -fprofile-arcs option's description, the data file therefore gets the suffix .cpp.gcno.
Also, they are not in the same folder as the src files
Data files are created in the same directory as the object files.
Actually, the generated files still work if you call
gcov main.cpp.gcno
from the directory containing the .gcno files.
Apparently the standard CMake behavior of appending the object extension (giving .cpp.o) can be changed to replacing the extension (giving plain .o) by using:
set(CMAKE_CXX_OUTPUT_EXTENSION_REPLACE ON)
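To handle all 50+ files in one go rather than invoking gcov per file, one option (a sketch that builds on the answer above; the object-directory path is taken from the question and may differ for your target) is to let find run gcov in every directory that contains .gcno files:

# Run from the build tree; -execdir runs gcov inside each directory holding a .gcno file.
cd project_root/build
find CMakeFiles/project.dir -name '*.gcno' -execdir gcov {} \;

The .gcov reports end up next to the corresponding object files, which is fine given that the location of the generated files doesn't matter here.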
I have a C++ application, consisting of several subprograms, which I'm trying to build with SCons.
Each subprogram has its own source files in a subdirectory of the source directory. These source files, e.g. source/prog1/prog1.cpp, are compiled into object files which reside in the object directory, e.g. object/prog1/prog1.o.
This works fine since each source directory has its own target directory, and there's no possibility of clashes.
However, what I'm trying to do is link these object files into executables, which would all end up in the same bin directory. So multiple source directories (object/prog1, object/prog2, etc.) would all feed into the same target directory (bin).
The directory layout looks like this:
application
    source
        prog1
            prog1.cpp
            something.cpp
        prog2
            prog2.cpp
            somethingelse.cpp
    object
        prog1
            prog1.o
            something.o
        prog2
            prog2.o
            somethingelse.o
    bin
        ??? <- what I'm concerned with
I'm trying to achieve that with the following SConstruct script:
env = Environment()
Export('env')
#common objects
common=env.SConscript("source/common/SConscript_object", variant_dir="object/common", duplicate=0)
Export('common')
#sub-programs
env.SConscript("source/prog1/SConscript_bin", variant_dir="bin", duplicate=0)
env.SConscript("source/prog2/SConscript_bin", variant_dir="bin", duplicate=0)
However, scons is complaining with the following error:
scons: *** 'bin' already has a source directory: 'source/prog1'.
The error goes away if I make it so that each subprogram has its own directory in the bin directory, e.g. variant_dir="bin/prog1".
So, my question is this: how can I link object files from multiple sources into the same variant dir?
In your case I would let SCons build the different binaries in their respective folders, and then use the Install builder to copy the binary files to the bin/ directory.
You would get something like:
env = Environment()
Export('env')
common = env.SConscript("source/common/SConscript_object", variant_dir="object/common", duplicate=0)
Export('common')
prog1 = env.SConscript("source/prog1/SConscript_bin", variant_dir="object/prog1", duplicate=0)
prog2 = env.SConscript("source/prog2/SConscript_bin", variant_dir="object/prog2", duplicate=0)
env.Install('bin', prog1)
env.Install('bin', prog2)
With the SConscript of each subprogram being something like:
Import('env')
Import('common')
# Glob needs a string pattern; link this program's sources together with the shared 'common' objects.
prog1 = env.Program('prog1', [env.Glob('*.cpp'), common])
Return('prog1')
I think SCons refuses to build different targets into a single variant directory because variant dirs are designed to build a given source directory with different build settings, like debug and release mode.
I want to use some other project's libraries in my implementation. That project has a /common folder where the libraries I want to include are located. In my Makefile, under LDLIBSOPTIONS, I included the path where the /common folder is located, like:
LDLIBSOPTIONS=-lpci -lpthread -I../../../OtherProj/Libs/common/
Then I include one .h file like:
#include <ExampleLib.h>
However, I still get:
fatal error: XXX.h: No such file or directory
What am I doing wrong? Thanks.
LDLIBSOPTIONS (more conventionally LDFLAGS) is used for specifying options to the linker. You need to specify the directory, using the -I flag, in CXXFLAGS:
CXXFLAGS += -I../../../OtherProj/Libs/common/
However, given that you are using non-standard names for your Makefile variables, CXXFLAGS might instead be called something like CXXOPTIONS; the exact name is unknown to me.
Once this is solved, you're going to get linker errors until you also specify the library search path using -L; perhaps:
LDLIBSOPTIONS = -L../../../OtherProj/Libs/common/ -lpci -lpthread
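Putting the two together, a minimal sketch of how these variables would typically be wired into the compile and link rules (myapp and main are placeholder names; the actual rule names in your generated Makefile will differ):

CXXFLAGS += -I../../../OtherProj/Libs/common/
LDLIBSOPTIONS = -L../../../OtherProj/Libs/common/ -lpci -lpthread

# Link step: -L and -l only matter here.
myapp: main.o
	$(CXX) -o $@ main.o $(LDLIBSOPTIONS)

# Compile step: -I takes effect here, so #include <ExampleLib.h> can be found.
main.o: main.cpp
	$(CXX) $(CXXFLAGS) -c -o $@ main.cpp

The key point is that the include path is a compile-time option, while the library path and libraries are link-time options, so they belong in different variables.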