I'm looking for the most portable and most organized way to include headers in C++. I'm making a game, and right now, my project structure looks like this:
game
| util
| | foo.cpp
| | foo.h
| ...
game-client
| main.cpp
| graphics
| | gfx.cpp
| | gfx.h
| ...
game-server
| main.cpp
| ...
Say I want to include foo.h from gfx.cpp. As far as I know, there are 3 ways to do this:
#include "../../game/util/foo.h. This is what I'm currently doing, but it gets messier the deeper into the folder structure I go.
#include "foo.h". My editor (Xcode) compiles fine with just this, but I'm not sure about other compilers.
#include "game/util/foo.h", and adding the base directory to the include path.
Which one is the best? (most portable, most organized, scales the best with many folders, etc.)
I have found the approach below most useful when dealing with a large code base.
Public headers
module_name/include/module_name/public_header.hpp
module_name/include/module_name/my_class.hpp
...
Private headers and source
module_name/src/something_private.cpp
module_name/src/something_private.hpp
module_name/src/my_class.cpp
Notes:
module_name is repeated so that the module name must be spelled out when including a public header from this library/module.
This improves readability and also avoids time spent hunting for a
header's location when the same file name is used in multiple
modules.
Thanks for the reply; I have edited my question.
I want to distinguish whether the rpc_address belongs to the local host or a remote node, so I added some logging in event_stats.cc:
const auto &rpc_address = CoreWorkerProcess::GetCoreWorker().GetRpcAddress();
RAY_LOG(INFO) << rpc_address.SerializeAsString() << "\n\n";
Then I added core_worker_lib to ray_common's dependencies in BUILD.bazel, but I get the following error:
ERROR: /home/Ray/ray/BUILD.bazel:2028:11: in cc_library rule //:gcs_client_lib: cycle in dependency graph:
//:ray_pkg
//:cp_raylet
//:raylet
//:raylet_lib
.-> //:gcs_client_lib
| //:gcs_service_rpc
| //:pubsub_lib
| //:pubsub_rpc
| //:grpc_common_lib
| //:ray_common
| //:core_worker_lib
`-- //:gcs_client_lib
So, my questions are:
How can I use CoreWorkerProcess in event_stats.cc?
When I change the code of event_stats.cc, I use "pip3 install -e . --verbose" to recompile the project, but it is too slow. Is there another way to recompile quickly when altering Ray's C++ code?
Let's say I have my directory set up like this (as an example):
Main
Scripts
read.f95
Files
file.txt
How would I go about using relative path to read in file.txt in my read.f95 file?
I tried using relative path as
open(10, file='./Files/file.txt') and
open(10, file='../Files/file.txt')
but I am getting a path error both ways. I have found this question, but the issue there was a too-long filename, which is not what I am asking about.
Let's say you have structure like this:
.
|-- code
| |-- relative
| `-- relative.F90
`-- data
`-- data.dat
and you want to run your code from the directory that contains both code and data. In that case, you can always concatenate your current working directory with the relative location of the data:
program relative
implicit none
real :: x, y
character (len=255) :: cwd
call getcwd(cwd)
open (10, file = trim(cwd)//'/data/data.dat', status = 'old')
read (10, *) x, y
close(10)
write(*, *) x, y
end program
while the data file data.dat looks like this:
0.1 0.2
Once you run it, you will get what you want:
> ./code/relative
0.100000001 0.200000003
However, you have to be extra careful with this approach. It only works from certain locations: it works as long as data/data.dat exists relative to the current directory. It can be useful when you submit jobs to a batch system. Say you have no idea where your code will end up (in terms of explicit location). In that case you can't hardcode the path, so you have two choices: either use a wrapper script and pass the location to your code via arguments, or make sure the directory structure looks the way you expect and everything is in place. In that case, using getcwd makes perfect sense.
I have a code base (mostly C++) which is well tested and crash free. Mostly. A part of the code -- which is irreplaceable, hard to maintain or improve, and links against a binary-only library* -- causes all the crashes. These do not happen often, but when they do, the entire program crashes.
+----------------------+
| Shiny new sane |
| code base |
| |
| +-----------------+ | If the legacy code crashes,
| | | | the entire program does, too.
| | Legacy Code | |
| | * Crash prone * | |
| | int abc(data) | |
| +-----------------+ |
| |
+----------------------+
Is it possible to extract that part of the code into a separate program, start that from the main program, move the data between these programs (on Linux, OS X and, if possible, Windows), tolerate crashes in the child process and restart the child? Something like this:
+----------------+ // start,
| Shiny new sane | ------. // re-start on crash
| code base | | // and
| | v // input data
| | +-----------------+
| return | | |
| results <-------- | Legacy Code |
+----------------+ | * Crash prone * |
| int abc(data) |
(or not results +-----------------+
because abc crashed)
Ideally the communication would be fast enough so that the synchronous call to int abc(char *data) can be replaced transparently with a wrapper (assuming the non-crash case). And because of slight memory leaks, the legacy program should be restarted every hour or so. Crashes are deterministic, so bad input data should not be sent twice.
The code base is C++11 and C, notable external libraries are Qt and boost. It runs on Linux, OSX and Windows.
--
*: some of the crashes/leaks stem from this library which has no source code available.
Well, if I were you, I wouldn't start from here ...
However, you are where you are. Yes, you can do it. You are going to have to serialize your input arguments, send them, deserialize them in the child process, run the function, serialize the outputs, return them, and then deserialize them. Boost will have lots of useful code to help with this (see asio).
Global variables will make life much more "interesting". Does the legacy code use Qt? - that probably won't like being split into two processes.
If you were using Windows only, I would say "use DCOM" - it makes this very simple.
Restarting is simple enough if the legacy code is only used from one thread (the code which handles the "return" just checks whether a restart is needed and restarts the process). If you have multiple threads, then the shiny code will need to check whether a restart is required, block any further calls, wait until all outstanding calls have returned, restart the process, and then unblock everything.
Boost::interprocess looks to have everything you need for the communication - it's got shared memory, mutexes, and condition variables. Boost::serialization will do the job for marshalling and unmarshalling.
I have a situation in which I need to interpose function calls made by a third-party static library to an iOS system framework (a shared library). Chances of cost-effective or timely maintenance support from the vendor of the static library are slim to non-existent.
In effect changing the calling sequence from:
+-------------+ +-------------+ +---------------------+
| Application | ---> | libVendor.a | ----> | FrameworkA (.dylib) |
+-------------+ +-------------+ +---------------------+
to:
+-------------+ +-------------+ +------------+ +---------------------+
| Application | ---> | libVendor.a | ----> | Interposer | ----> | FrameworkA (.dylib) |
+-------------+ +-------------+ +------------+ +---------------------+
The orthodox solutions to this problem are either persuading the run-time linker to load a different library in place of FrameworkA, or loading libraries with dlopen(). Neither is an option on iOS.
One solution that does work is using sed to rename symbols in the symbol table of libVendor.a.
Suppose I want to interpose calls to FrameworkA_Foo() in FrameworkA made by functions in libVendor.a:
sed s/FrameworkA_Foo/InterposeA_Foo/g < libVendor.a > libInterposedVendor.a
And then interpose it with:
void InterposeA_Foo()
{
// do some stuff here
// ....
// then (maybe) forward
FrameworkA_Foo();
}
This works just so long as the length of the symbol names remains the same.
Whilst this approach works, it lacks elegance and feels fairly hacky. Do better solutions exist?
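One possible refinement (an assumption on my part, not something I have verified against Mach-O archives): objcopy-style symbol renaming performs the same rewrite without the equal-length restriction. A small ELF demonstration, using a made-up stand-in for the vendor archive:

```shell
# Stand-in "vendor" archive; names are illustrative.
cat > vendor.c <<'EOF'
void FrameworkA_Foo(void);
void vendor_entry(void) { FrameworkA_Foo(); }
EOF
cc -c vendor.c -o vendor.o
ar rcs libVendor.a vendor.o
# GNU objcopy processes every member of an archive; llvm-objcopy has a
# compatible --redefine-sym, though its Mach-O support varies by version.
objcopy --redefine-sym FrameworkA_Foo=InterposeA_Foo \
    libVendor.a libInterposedVendor.a
nm libInterposedVendor.a | grep Foo    # shows: U InterposeA_Foo
```

Whether llvm-objcopy handles your particular Mach-O static library correctly is something you would need to verify for your toolchain version.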
Other approaches considered:
Changing the link order so that the interposing function links against libVendor.a rather than the framework: Apple's linkers (unlike those on most UNIX platforms) recursively resolve symbols in libraries, and the order they are presented on the command line makes little difference.
Linker scripts: Not supported by lld
mach-o object-file editing tools: Found nothing that worked
[For clarity, this is a C (rather than Objective-C) API and the toolchain is clang/LLVM. Using an Apple-supplied GCC is not an option due to deprecation and lack of C++11 support.]
We're in a period of development where a lot of code is created that may be short-lived: it's effectively scaffolding which at some point gets replaced with something else, but will often continue to exist and be forgotten about.
Are there any good techniques for finding the classes in a codebase that aren't used? Obviously there will be many false positives (eg library classes: you might not be using all the standard containers, but you want to know they're there), but if they were listed by directory then it may make it easier to see at a glance.
I could write a script that greps for every class XXX and then searches again for all uses, omitting results from the cpp file in which the class's methods are defined. This would also be incredibly slow: O(N^2) in the number of classes in the codebase.
Code coverage tools aren't really an option here, as this has a GUI whose functions can't all easily be invoked programmatically.
Platforms are Visual Studio 2013 or Xcode/clang
EDIT: I don't believe this to be a duplicate of the dead code question. Although there is an overlap, identifying dead or unreachable code isn't quite the same as finding unreferenced classes.
If you're on Linux, then you can use g++ to help you with this.
I'm going to assume that only when an instance of the class is created will we consider it as being used. Therefore, rather than looking just for the name of the class you could look for calls to the constructors.
struct A
{
A () { }
};
struct B
{
B () { }
};
struct C
{
C () { }
};
void bar ()
{
C c;
}
int main ()
{
B b;
}
On Linux at least, running nm on the binary shows the following mangled names:
00000000004005bc T _Z3barv
00000000004005ee W _ZN1BC1Ev
00000000004005ee W _ZN1BC2Ev
00000000004005f8 W _ZN1CC1Ev
00000000004005f8 W _ZN1CC2Ev
Immediately we can tell that none of the constructors for 'A' are called.
Using slightly modified information from this SO answer, we can also get g++ to discard functions that are never called:
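The exact command from the linked answer isn't quoted here; a reconstruction (an assumption on my part) using GCC's section garbage collection, with the example saved as test.cpp:

```shell
# Compile with each function in its own section and let the linker drop
# any section that is never referenced (GNU toolchain on Linux).
cat > test.cpp <<'EOF'
struct A { A () { } };
struct B { B () { } };
struct C { C () { } };
void bar () { C c; }
int main () { B b; }
EOF
g++ -ffunction-sections -fdata-sections -Wl,--gc-sections test.cpp -o test
nm test | grep ' [TW] _Z'
```

Here bar() is never called, so its section is discarded, taking C's constructors with it.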
Which results in:
00000000004005ba W _ZN1BC1Ev
00000000004005ba W _ZN1BC2Ev
So, on Linux at least, you can tell that neither A nor C is required in the final executable.
I've come up with a simple shell script that will at least help to focus attention on the classes that are referenced the least. I've made the assumption that if a class isn't used then its name will still appear in only one or two files (the declaration in the header and the definition in the cpp file). So the script uses ctags to search for class declarations in a source directory. Then for each class it does a recursive grep to find all the files that mention the class (note: you can specify different class and usage directories), and finally it writes the file counts and class names to a file and displays them in numerical order. You can then review all the entries that have only 1 or 2 mentions.
#!/bin/bash
CLASSDIR=${1:-}
USAGEDIR=${2:-}
if [ "${CLASSDIR}" = "" -o "${USAGEDIR}" = "" ]; then
echo "Usage: find_unreferenced_classes.sh <classdir> <usagedir>"
exit 1
fi
ctags --recurse=yes --languages=c++ --c++-kinds=c -x $CLASSDIR | awk '{print $1}' | uniq > tags
[ -f "counts" ] && rm counts
for class in `cat tags`;
do
count=`grep -l -r $class $USAGEDIR --include="*.h" --include="*.cpp" | wc -l`
echo "$count $class" >> counts
done
sort -n counts
Sample output:
1 SomeUnusedClassDefinedInHeader
2 SomeUnusedClassDefinedAndDeclaredInHAndCppFile
10 SomeClassUsedLots