There's a file, just a single file; there are no shards or anything else:
/alex/projects/inspector/inspector.cr
I want to require it in another file in another folder
/alex/projects/my-project/play.cr
This won't work:
require "/alex/projects/inspector/inspector"
require "/alex/projects/inspector/inspector.cr"
This doesn't work either:
CRYSTAL_PATH=$CRYSTAL_ROOT/src:lib:/alex/projects/inspector
require "inspector"
require "inspector.cr"
require "./inspector"
require "./inspector.cr"
P.S.
I would like to avoid using shards etc., as I have no plans to share or publish that file. It's just a file that's used by a couple of other files in different locations.
I solved it by creating a symlink:
ln -s /alex/projects/inspector/inspector.cr /alex/projects/my-project/inspector.cr
require "./inspector"
Currently, require is only relative (as you learned from your forum post). However, you were on the right track with CRYSTAL_PATH.
CRYSTAL_PATH is an environment variable used by the Crystal compiler which tells it where to look for dependencies. So instead of using it in code, as it appears you did, you should use it when building the executable:
CRYSTAL_PATH=$CRYSTAL_ROOT/src:lib:/alex/projects/inspector crystal build /alex/projects/my-project/play.cr
Note that CRYSTAL_ROOT must be defined for that exact command to work. If you need to find the current CRYSTAL_PATH in order to append to it, you can use crystal env.
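For example, assuming a POSIX shell, and assuming your Crystal version lets you ask crystal env for a single variable (otherwise copy the value out of crystal env's full output), a sketch:

export CRYSTAL_PATH="$(crystal env CRYSTAL_PATH):/alex/projects/inspector"
crystal build /alex/projects/my-project/play.cr

With the path set up this way, play.cr can simply require "inspector".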
Related
I'm taking over a project from a coworker that involves several extensive SAS process flows. I have all the files with all the same names and a copy of the process flows they used. Since the file paths in their processes are direct references to their computer, normally I would just re-import the files with the same output names and run the process from there. In a few cases I would have to recreate a query builder as I'm using a few .sas7bdat files from another project.
However, there are quite a few files involved, and I may end up having to pass this to another coworker in a few months. Since I can't get a good look at exactly what the import task is doing, I'm concerned I may have some of the variables imported incorrectly. Is there an easy way to just change the file path that an import or other task refers to?
Given the updates in comments, there are two possibilities I see.
If the paths you're changing are, or can be, relative to the location of the EGP, then you can right-click the project, choose Properties -> File References, and check "Use paths relative to the project...". That means that instead of storing a file as c:\my EGP folder\my code folder\code.sas, it is stored as my code folder\code.sas, so if the whole project moves to another computer (or just any other folder) it automatically has the right path. This is mostly useful for code and similar things.
Otherwise, you're going to have to convert things to SAS code modules, where you can use macro variables to define the locations of things.
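For example, a minimal sketch; the root path, libref, and file names are all hypothetical:

/* One macro variable holds the project root; edit a single %let when the
   project moves to another machine or folder. */
%let projroot = C:\projects\myproject;

libname mydata "&projroot.\data";            /* librefs resolve against the root */
filename rawcsv "&projroot.\raw\input.csv";  /* so do filerefs */

proc import datafile=rawcsv out=mydata.input dbms=csv replace;
run;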
I've had trouble with this issue across many languages, most recently with C++.
The Issue Exemplified
Let's say we're working with C++ and have the following file structure for a project:
("Project" main folder with three [modules, data, etc] subfolders)
Now say:
Our maincode.cpp is in the Project folder
moduleA.cpp is in the modules folder
data.txt is in the data folder
moduleA.cpp wants to read data.txt
So the way I'd currently do it would be to assume maincode.cpp gets compiled and executed inside the Project folder, and so hardcode the relative path data/data.txt in moduleA.cpp to do the reading (say with std::ifstream fs("data/data.txt")).
But what if the code was, for some reason, executed inside etc folder?
Is there a way around this?
The Questions
Is this a valid question, or am I missing something fundamental about the working directory (wd) concept?
Are there any methods for working around absolute paths so as to solve this issue in C++?
Are there any universal methods for doing the same with any language?
If there are no reasonable methods, how would you approach this issue?
Please leave a comment if I missed any important details with the problem's illustration!
At some point the program has to make an assumption about where the file(s) are, either by getting a path from user input or by using a relative path with a presumed filename. As already said in the comments, C++ recently got std::filesystem in C++17, which can help you write cross-platform code that interacts with the host's filesystem.
That being said, every program, big or small, has to make certain assumptions at some point; deleting or moving files is problematic for any program that requires them to be at a certain location under a certain name. That is not solvable other than by presenting the user with an error message.
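For example, a minimal C++17 sketch that checks the assumption up front and reports a useful error instead of silently reading nothing (the path is the one from the question):

#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Relative to the current working directory, as in the question.
    const std::filesystem::path data = "data/data.txt";

    if (!std::filesystem::exists(data)) {
        // The assumption failed; tell the user where we actually looked.
        std::cerr << "Expected " << std::filesystem::absolute(data)
                  << " to exist, but it does not.\n";
        return 1;
    }

    std::ifstream fs(data);
    std::string line;
    while (std::getline(fs, line))
        std::cout << line << '\n';
}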
As @Hatted Rooster said, it's not generally solvable for some arbitrary file without making some assumptions; however, there are frameworks that allow you to "store" some files in resources embedded into the executable (or otherwise). Those frameworks usually let you handle such files in an opaque way, without relying on the current working directory or relative paths.
For example, see the Qt Resource System.
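A sketch of what that looks like: data.txt is listed in a .qrc file that gets compiled into the binary,

<RCC>
    <qresource prefix="/">
        <file>data/data.txt</file>
    </qresource>
</RCC>

and the code then opens the embedded copy through a ":/..." path, which works regardless of the working directory:

#include <QFile>
#include <QString>
#include <QTextStream>

QString readEmbeddedData() {
    // ":/data/data.txt" names the resource compiled in via the .qrc file.
    QFile file(":/data/data.txt");
    if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
        return {};
    return QTextStream(&file).readAll();
}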
Your program can deduce the path from argv[0] in main if you know the data always sits relative to your executable, or you can use an absolute path like "C:\myProgram\data\data.txt".
The second approach works in every language.
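A C++17 sketch of the argv[0] approach. Note the caveat: argv[0] is only reliable here if the program is invoked via a real path (when a shell finds the binary through PATH, argv[0] may be just the bare name):

#include <filesystem>
#include <fstream>
#include <iostream>

int main(int, char* argv[]) {
    namespace fs = std::filesystem;

    // Directory containing the executable, derived from how we were invoked.
    const fs::path exeDir = fs::absolute(fs::path(argv[0])).parent_path();

    // Resolve the data file relative to the executable, not the working directory.
    const fs::path dataFile = exeDir / "data" / "data.txt";

    std::ifstream in(dataFile);
    if (!in) {
        std::cerr << "Cannot open " << dataFile << '\n';
        return 1;
    }
    std::cout << in.rdbuf();
}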
This question says the best place to store settings on Linux is in ~/.config/appname.
The program I'm writing needs to use a 99 MB .dat file for recognizing facial landmarks; embedding it in the binary doesn't seem like a good idea.
Is there some default place to store resources on Linux? Currently the file just sits next to the executable, but this requires that the program is run with the current directory set to the directory it's located in.
What's the best way to deal with resources like this on Linux (ideally in a way that could be cross-platform with at least OS X)?
You should take a look at the Filesystem Hierarchy Standard (FHS). Depending on the data (will it change, is it constant across all installations, etc.), the standard prescribes where it should be placed.
In general:
/usr/lib/program: includes object files, libraries, and internal binaries for an application
/usr/share/program: for all read-only architecture independent data files
/var/lib/program: holds state information pertaining to an application or the system
Those seem like pretty good places to start, and you can check the documentation to see if your app falls into one of those categories.
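For a read-only data file like yours, a common pattern is to probe a short list of candidate locations in order; here is a C++17 sketch (the app name, file name, and environment variable are all made up):

#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <optional>
#include <vector>

namespace fs = std::filesystem;

// Return the first existing copy of the data file: an environment override
// first, then the FHS location for read-only data, then next to the binary.
std::optional<fs::path> findDataFile(const fs::path& exeDir) {
    std::vector<fs::path> candidates;
    if (const char* env = std::getenv("MYAPP_DATA_DIR"))     // hypothetical override
        candidates.push_back(fs::path(env) / "landmarks.dat");
    candidates.push_back("/usr/share/myapp/landmarks.dat");  // FHS read-only data
    candidates.push_back(exeDir / "landmarks.dat");          // fallback: beside the binary

    for (const auto& p : candidates)
        if (fs::exists(p))
            return p;
    return std::nullopt;
}

int main(int, char* argv[]) {
    const fs::path exeDir = fs::absolute(fs::path(argv[0])).parent_path();
    if (auto p = findDataFile(exeDir))
        std::cout << "Using " << *p << '\n';
    else
        std::cerr << "landmarks.dat not found in any known location\n";
}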
If the file is specific to the user running the app, it should go in a subdirectory of ~/, and the best choice depends a lot on the file type and usage (the XDG Base Directory specification points user-specific data at ~/.local/share). If it should be visible to the user via a GUI, you could use ~/Desktop or ~/Downloads. If it's temporary, you can use ~/tmp or ~/var/tmp.
If it's not specific, you should place it in a subdir of /var. Again, the exact subdir may depend on its kind and other factors.
I know I can use install-data-hook to do anything I want after my data files are copied and, that way, adjust the file permissions as I like.
My question, though, is about the step before that.
Is there any way I can tell automake to set a standard permission mask for a group of data files before they get copied?
I mean I want the install step itself to use the correct mode, rather than letting it use the default 0644 and then correcting all the file permissions afterwards.
In other words, I want the task done right the first time, not fixed after the fact.
Is it possible?
Thanks!
Automake implements the GNU Standards. These state that data files should be installed using the command $(INSTALL_DATA), which should default to $(INSTALL) -m 644.
What you can do is override the value of INSTALL_DATA in some Makefile.am; all data files installed by that Makefile.am will then use that definition. If you have two groups of data files that require different modes, you will have to move them into two different directories so that each can have its own Makefile.am.
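A sketch of such a Makefile.am; the directory, group, and file names are made up:

# Everything installed as _DATA from this Makefile.am now uses mode 600.
INSTALL_DATA = $(INSTALL) -m 600

# Hypothetical data group that needs the stricter mode.
secretconfdir = $(sysconfdir)/myapp
secretconf_DATA = credentials.conf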
I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to build from scratch (while incremental builds are very quick).
I usually try to implement each new feature in a new branch (using "hg clone"), and I may have several new features in progress during the day; it quickly gets very tedious to wait for each new feature branch to build.
Are there any recipes for re-using object files from other, already built branches?
P.S. In git there are named branches within the same repository, which lets the build system re-use existing object files; however, I prefer the simpler Mercurial model of separate clones per branch...
I suggest using ccache as a way to speed up compilation of (mostly) the same code tree. The way it works is as follows:
You point ccache at a cache directory with the CCACHE_DIR environment variable (the maximum cache size is set with ccache -M).
Your compiler should be invoked as ccache ${CC} or ccache ${CXX}.
ccache takes the output of ${CC} -E plus the compilation flags and uses that as the base for its hash. As long as the compiler flags, the source file, and the headers are all unchanged, the object file is taken from the cache, saving valuable compilation time.
Note that this method speeds up compilation of any source file that eventually produces the same hash. If you share source files across projects, ccache will handle them as well.
If you already use distcc and wish to use it with ccache, set the CCACHE_PREFIX environment variable to distcc.
Using ccache sped up our source tree compilation around tenfold.
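A minimal setup might look like this (the paths and the size are just examples):

export CCACHE_DIR=$HOME/.ccache    # where the cache lives
ccache -M 10G                      # cap the cache at 10 GB
export CC="ccache gcc"             # wrap the compilers
export CXX="ccache g++"
ccache -s                          # show hit/miss statistics after a build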
A simple way to speed up your builds could be to use a local "build directory" on your disk. That way you can check out into this directory and start the build. The first time it will take the full time, but after that it will (hopefully) only rebuild the files whose source code changed.
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
Whoops, I missed your P.S., where you say you don't like having multiple named branches in the same repo and prefer separate clones... sorry about that.
I too have somewhat large C++ projects, and the clone-per-feature workflow didn't work for me very well. Firstly, I had to close down my Vim session and then reopen (many of the same) files once I'd created the clone. Secondly, like you said, a lot of code must be recompiled unnecessarily. Thirdly, I have to keep track of where I've pushed to and pulled from, which gets confusing when you start a new feature and then get sidetracked onto another one. Before you know it you have many clones and aren't sure which ones need to be pushed back to your main.
You definitely don't want to use named branches (as I'm sure you know) to handle this as they are quite permanent.
What you need are bookmarks: https://www.mercurial-scm.org/wiki/BookmarksExtension
Bookmarks allow you to create lightweight (and otherwise anonymous) branches per feature by naming heads in your repo. These heads would normally be unnamed, and you would have to look at the output of 'hg log' or use some graphical tool to find the revision numbers for the tips of your feature branches. With bookmarks you can give them descriptive names like 'my-cool-feature' or 'bugfix-392'.
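A typical session might look like this (using the names above):

hg bookmark my-cool-feature    # name the current head
# ...commit some work; the active bookmark moves along with your commits...
hg update default              # go back to the main line of development
hg bookmark bugfix-392         # start a second lightweight branch
hg update my-cool-feature      # jump back to the first feature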
If you like the idea of bookmarks, I'd also recommend my own extension called 'tasks': http://bitbucket.org/alu/hgtasks. This extension works like bookmarks but adds some more functionality. It allows you to create feature branches (now called tasks) and suppress the pushing of incomplete tasks. This is handy when you have a few feature branches at once. You may not be ready to push your 'my-cool-feature' task, but 'bugfix-392' is ready to go. Because tasks track a set of changesets (and not just one 'tip' changeset), there are some things you can do with tasks that you can't with bookmarks. See an example workflow here: http://x.zpuppet.org/2009/03/09/mercurial-tasks-extension/.
Mercurial also has local named branches, see the hg branch command.
If you insist on using hg clone for branchy development, I guess you could try creating a folder link (a junction under Windows) in your repo to a shared obj folder. This will work with hg clone, but I'm not sure your build tool will pick it up.
Otherwise, you probably keep all your repos in one folder - just put your obj folder there (it shouldn't be under source control anyways, imo). Use relative paths to refer to it.
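For example, assuming all clones sit side by side and your build tool follows symlinks (the layout is hypothetical):

mkdir ~/hg/shared-obj
ln -s ../shared-obj ~/hg/feature-clone/obj    # repeat for each clone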
A word of warning: many .o symbol tables (or equivalent) contain the full path name of the source file. If that other file changes (or if the path is not visible from the new directory) you may encounter weirdness when debugging.