I wrote a tool based on libclang that takes a file, creates a translation unit from it, and parses the AST. The tool needs compile flags, so I must specify all the flags required to compile the file. Here is a small problem: I cannot specify relative paths to include directories, because to create the translation unit the tool creates a temporary file and copies the whole input file into it (this is needed because the tool can also work in interactive mode). As a result, the paths of the temporary file and the source file differ. It seems I would have to set absolute paths in the -I flags, i.e. manually parse the input flags and check whether each path is relative or not. Can I specify some prefix for all relative include directories, or set the current directory for the compiler?
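For illustration only (the paths below are made up, and the clang command lines stand in for the argument vector handed to libclang): a relative -I only works if it is resolved against the original file's directory before the temporary copy is parsed. Clang also has a -working-directory option that resolves file paths relative to a given directory, which might spare you the manual rewrite, though it is worth verifying that it behaves the same way when the flags go through libclang rather than the clang driver.

# Relative flags as the user writes them: -Iinclude -Ithird_party/include
# Option 1: resolve them to absolute paths before parsing the temporary copy.
clang -c -I/home/user/proj/include -I/home/user/proj/third_party/include /tmp/tool_copy.c
# Option 2: keep them relative, but anchor them with an explicit working directory.
clang -c -working-directory=/home/user/proj -Iinclude -Ithird_party/include /tmp/tool_copy.c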
I've been asked to make a small mod to some software that was written back in the mid-noughties on IAR Embedded Workbench v3.3.
I have had the original source files copied from an old machine to one I have been given for the task.
For the moment I am simply trying to get the software compiling. It took me a while to realise, or at least I thought I'd realised, that the reason it couldn't open various header files was that, incredibly, all the include paths were absolute, not relative.
So, I changed all the paths to be $PROJ_DIR$-relative, but then started to get different files that couldn't be opened. Then I realised that the machine they gave me just happened to have a very similar directory structure to the original machine and, amazingly, contained quite a few of the same files in the same locations as the machine originally used to compile the code.
I then thought, OK, I'll just check I have got my relative paths correct by choosing one of the header files it was complaining about not finding and putting, in the Preprocessor tab, an absolute path to the directory on this machine I'm using that contained the header file it wanted. However, that still wouldn't find the header file!
Finally, I put an absolute path in the c file to point to the desired header file.
#include "C:\absolute__Path\stdtyp.h"
And it compiled.
To confirm:
Putting C:\absolute__Path
in the Project | Options | C/C++ compiler | Preprocessor tab will not work if I just have:
#include "stdtyp.h"
in the c file.
I have used IAR in the past - not that much - but I have used it and I was sure that's where you set your include directories. So, am I wrong, or can there be something else that is overriding that path in the Preprocessor tab as described above?
Edit: I'm not wrong. After sleeping on it, I decided to create a new project with random directories, subdirectories and header files. Sure enough, if I add and then remove $PROJ_DIR$-relative paths in the Preprocessor tab, the new project compiles and then doesn't. So there must be something, presumably in the .ewp file, that is breaking it.
It turns out you can override the paths on an individual, file-by-file basis. So, the rogue files had their paths overridden with absolute paths.
Right click on the file in EW and select Options.
That then, for most files, shows a load of greyed-out boxes. What I'd failed to do was thoroughly check all the files. The few I'd randomly checked were greyed out, but some files had their properties overridden here, with different (and absolute) paths put there.
At least now the project can be easily copied between machines having used relative paths.
Following up on this question about including source files: I am including a Chapel module that contains a file called classes.chpl, but my current project also has a classes.chpl. What is the correct disambiguation pattern? When I do
chpl -M/path/src
it notes the conflict, then chooses the classes.chpl in the current directory. Should I compile the module for export, as described on this page, or is there another pattern?
== UPDATE ==
The directory structure looks like
projA/alpha.chpl
projA/classes.chpl
projB/beta.chpl
projB/classes.chpl
where each project depends on the classes in its respective classes.chpl file. Trying to compile projA, I am currently using
chpl alpha.chpl -M../projB/
But this causes a conflict, as it tries to use projA/classes.chpl for the classes in both beta.chpl and alpha.chpl.
As described in the module search paths tech note, the Chapel compiler searches for user modules in the following order:
1. Looking at .chpl files specified on the command line
2. Looking at other .chpl files in the directories containing the files specified on the command line
3. Looking at .chpl files in the paths specified via the -M option or the CHPL_MODULE_PATH environment variable
Since the compiler finds the classes.chpl from the project directory using rule 2, and only finds the /path/src/classes.chpl with rule 3, it chooses the one in the project directory. To get it to choose /path/src/classes.chpl instead, you can specify it on the command line so it is found with rule 1.
chpl mainModule.chpl /path/src/classes.chpl
I'm writing a large OCaml project. I wrote a file foo.ml, which works perfectly. In a subdirectory of foo.ml's directory, there is a file bar.ml.
bar.ml references code in foo.ml, so its opening line is:
open Foo
This gives me an error at compile time:
Unbound module Foo.
What can I do to fix this without changing the location of foo.ml?
The easy path is to use one of the OCaml build systems, like ocamlbuild or oasis. Another option would be jbuilder, but jbuilder is quite opinionated about file organization and does not allow the kind of subdirectory structure you are asking for.
The more explicit path comes with a warning: the OCaml build process is complicated, with many moving parts that can be hard to deal with.
After this customary warning: when looking for modules, the OCaml compiler first looks for the module in the current compilation environment, then looks for compiled interface ".cmi" files in the directories specified by the "-I" option flags (plus the current directory and the standard library directory).
Thus, in order to compile your bar.ml file, you will need to add the parent directory to the list of included directories with the -I .. option.
After all this, you will discover that during the linking phase, all object files (i.e. .cmo or .cmx) need to be listed in a topological order compatible with the dependency graph of your project.
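A minimal sketch of those manual steps, assuming foo.ml sits in the project root, bar.ml lives in a subdirectory called sub, and bar.ml is the program's entry point (the directory and executable names are placeholders):

# Compile the parent module first so foo.cmi and foo.cmo exist.
ocamlc -c foo.ml
# Compile bar.ml from inside the subdirectory, pointing the compiler at the
# directory holding foo.cmi.
cd sub
ocamlc -c -I .. bar.ml
cd ..
# Link, listing the object files in dependency (topological) order.
ocamlc -o main foo.cmo sub/bar.cmo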
Consequently, let me repeat my advice: use a proper build system.
When I compile my c++ program that uses Protobuf, and then run the linux strings command on the binary, one of the strings is a path to the generated cc file, with my home directory and everything. Obviously I'd like to eliminate my home directory and other personal information from the binary.
Where does this path come from and how can I prevent it from making it into the compiled binary?
The string comes from the embedded protobuf descriptor, which is used to perform dynamic introspection of protobuf types. Essentially, the descriptor describes your whole .proto file. The descriptor itself is encoded in protobuf format; see google/protobuf/descriptor.proto.
Now, the descriptor normally should not contain absolute paths like you describe. It really wants to contain "canonical" paths -- that is, the path of the proto file relative to the source code root, or in other words, the path that you'd write in an import statement for that file. For instance, descriptor.proto's own canonical path is google/protobuf/descriptor.proto; to import it, you would write import "google/protobuf/descriptor.proto";.
The reason your descriptors contain the full absolute filesystem path is that this is the path you are passing to protoc, and you are not passing a -I flag to tell protoc where the root of your source tree is. Since protoc can't figure out the root of the source code, it falls back to the filesystem root.
For instance, say your .proto file is /home/foo/myproj/src/frobber/baz.proto. Say that the src directory in this path is your "source root", meaning that you want people to write import "frobber/baz.proto"; to import your proto file. In that case, you want to invoke protoc like this:
protoc -I/home/foo/myproj/src /home/foo/myproj/src/frobber/baz.proto
Note that if you are running the command from, say, the myproj directory, then you probably shouldn't specify an absolute path at all:
protoc -Isrc src/frobber/baz.proto
It is very important that the -I flag here is a textual prefix of the source file name. protoc is dumb and only knows how to compare strings. It doesn't, for instance, know what the current directory is:
# DOES NOT WORK
cd /home/foo/myproj
protoc -I/home/foo/myproj/src src/frobber/baz.proto
And it also cannot canonicalize "..":
# DOES NOT WORK: protoc doesn't collapse "xyz/../".
protoc -Isrc xyz/../src/frobber/baz.proto
However ".." is OK if it's consistent, because again protoc only cares about a prefix match:
# OK: Prefix is consistent.
protoc -Ixyz/../src xyz/../src/frobber/baz.proto
If you'd rather not have a descriptor
You can compile your proto files in "lite mode" by placing the following line in your .proto file:
option optimize_for = LITE_RUNTIME;
In this mode, descriptors will not be included at all. Additionally, you can link against the "lite" version of the protobuf runtime library, which is much smaller than the regular version. However, many useful features will be disabled. The whole reflection interface will be gone, and anything that depends on reflection will be gone as well. For example, TextFormat, which is what the DebugString() method uses to convert messages into text to print for debugging, will be removed, therefore debugging will be harder.
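A hedged sketch of the link step in that case (the source and binary names are placeholders; the lite runtime is the libprotobuf-lite library that protobuf normally ships alongside the full one):

# After regenerating the .pb.cc files with LITE_RUNTIME, link against the
# lite runtime instead of the full library:
g++ main.cpp baz.pb.cc -o myprog -lprotobuf-lite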
I don't quite understand paths in protobuf. My file layout is like this:
Top/
  A/
    a.proto
  B/
  C/
    c.proto    // import "A/a.proto";
I have written an RPC system based on protobuf, and I need to generate two kinds of files (client and server code) from c.proto. Client code should be placed in B and server code should stay in C.
I can't write a correct command.
Top> protoc -I=. --client_out=./B/ C/c.proto generates the client code in B/C, and the #include lines in the generated code have the wrong path.
Top/C> protoc -I=../ -I=./ --client_out=./ ./c.proto leads to a protobuf_AddDesc_* error.
For every .proto file, protoc tries to determine the file's "canonical name" -- a name which distinguishes it from any other .proto file that may ever find its way into your system. In fact, ideally, the canonical name is different from every other .proto file in the world. The canonical name is the name you use when you import the .proto file from another .proto file. It is also used to decide where to output the generated files and what #includes to generate.
For .proto files specified on the command line, protoc determines the canonical name by trying to figure out what name you would use to import that file. So, it goes through the import paths (specified with -I) and looks for one that is a prefix of the file name. It then removes that prefix to determine the canonical name.
In your case, if you specify -I=. C/c.proto, then the canonical name is C/c.proto. If you specified -I=C C/c.proto, the canonical name would then simply be c.proto.
It is important that any file which attempts to import your .proto file imports it using exactly the canonical name determined when the file itself was compiled. Otherwise, you get the linker error regarding AddDesc.
In general, everything works well if you designate some directory to be the "root" of your source tree, and all of your code lives in a subdirectory of that with a unique name designating your project. Your "root" directory should be the directory you pass to both -I and --client_out. Alternatively, you can have separate directories for source files vs. generated files, but the generated files directory should have an internal structure that mirrors your source directory. You can then specify the generated files directory to --client_out, and when you run the C++ compiler, specify both the source and generated files directories in the include path.
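As a rough sketch of that recommended setup for the tree in the question, run from Top with Top as the import root (gen/client and gen/server are made-up output directories, --client_out is taken from the question, and --server_out plus the generated file names are guesses since they depend on your custom plugin):

mkdir -p gen/client gen/server
# Top is the import root, so the canonical names are A/a.proto and C/c.proto,
# and the generated files mirror that layout under each output directory
# (e.g. gen/client/C/...).
protoc -I=. --client_out=gen/client --server_out=gen/server A/a.proto C/c.proto
# When compiling the C++ code, put the source root and the generated roots on
# the include path so the generated #include "C/..." and "A/..." lines resolve
# (the exact generated file names depend on your plugin):
g++ -I. -Igen/client -c gen/client/C/c.pb.cc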
If you have some other setup -- e.g. one where the .proto files live at a different canonical path from the .pb.h files -- then unfortunately you will have some trouble making protoc do what you want. That said, since you are writing a custom code generator, you could invent whatever rules you want for how its output files are organized, though straying from the rules the standard code generators follow might lead to lots of little pitfalls.