I tried to find the answer both online and in my books, but I'm having a hard time figuring out exactly how this is handled.
Let's take this scenario. I have a few files:
a.clj - namespace: aaa.a
b.clj - namespace: bbb.b
c.clj - namespace: ccc.c
d.clj - namespace: ddd.d
Each of these files defines a few functions. Then I have this sequence of :require statements:
a.clj: [:require [bbb.b] [ccc.c] [ddd.d]]
b.clj: [:require [ccc.c]]
d.clj: [:require [bbb.b]]
Then my core application does [:require [aaa.a]]
My understanding is that when I compile my core application, the following happens:
1. Compile the core file
2. Compile a.clj
3. Compile b.clj
4. Compile c.clj
5. Compile c.clj (is it skipped since it is already compiled?)
6. Compile d.clj
7. Compile b.clj (is it skipped since it is already compiled?)
My first question regarding this setup:
Are the files in steps #5 and #7 recompiled, or just skipped over by the compiler?
Then, let's say that I define a function foo in c.clj. If the file is in fact re-compiled in step #5, would the function foo change its identifier? Something like:
Was #<ccc$foo ccc.c$foo#431a3bbd> when first compiled
Would be #<ccc$foo ccc.c$foo#632a3cdt> when compiled the second time (if it is indeed compiled a second time)
I am asking these questions because what I think I am experiencing is that the files get re-compiled, and the references to my functions change depending on how files are required in the project.
My intuition tells me that already-required files should be skipped if they are re-required down the road. However, it looks like this is not what is happening, hence this question.
Really tracking down this behavior is not a simple task, which is why I am seeking a deeper understanding of the impact of such cascading require statements before continuing my debugging.
require takes an optional :reload or :reload-all key: :reload asks for the namespace in question to be recompiled, and :reload-all asks for a recursive recompilation of all the namespaces it depends on. If you specify neither, an already-loaded namespace will not be reloaded. You can verify this with a simple println at the top level of the namespace (outside of any definition). Changing identifiers should not be a problem, because your code should not be referring to identifiers; it should be referring to vars, which resolve to objects. Even if a var is rebound to a new value, functions that captured the old object the var pointed to will still see it (it cannot be collected by the gc, because they still hold a reference to it).
I'm writing function libraries in Python 2.7.8, to use in some UAT testing using froglogic Squish. It's for my employer, so I'm not sure how much I can share and still conform to company privacy regulations.
Early in the development, I put some functions in some very small files. There was one file that contained only a single function. I could import the file and use the function with no problem.
I am at a point where I want to consolidate some of those tiny files into a larger file. For some reason that completely eludes me, some of the functions that I copy/pasted into this larger file are not being found, and a "NameError: global name 'My_variableStringVerify' is not defined" error is displayed, for example. (I just added the "My_" in case there was a name collision with some other function...)
This worked with the EXACT same simple function in a separate 'module'. Other functions in this python file -- appearing both before and after this function in the new, expanded module -- are being found and used without problems. The only module this function needs is re. I am importing that. I deleted all the pyc files in the directory, in case that was not getting updated (I'm pretty sure it was, from the datetime on the pyc file).
I have created and used dozens of functions in a dozen of my 'library modules', all with no issues. What's so special about this trivial, piece of crap function, as a part of a different module? It worked before, and it STILL works -- as long as I do not try to use it from the new library module.
I'm no python guru, but I have been doing this kind of thing for years...
Ugh. What a fool. The answer was in the error after all: "global name xxx is not defined". I was trying to use the function directly inside a Squish API call, which is the global scope. Once I moved the call to my function outside of the Squish API call (using it in the local scope), it worked fine.
The detail that surprised me: I was using "from foo import *", in both cases (before and after adding it to another 'library' module of mine).
When this one function was THE ONLY function in foo, I was able to use it in the global scope successfully.
When it was just one of many functions in foo-extended (names have been changed, to protect the innocent), I could NOT use it in the global scope. I had to reference it in the local scope.
After spending more time reading https://docs.python.org/2.0/ref/import.html (yes, it's old), I'm surprised it appeared in the global scope in either case. That page did state that "(The current implementation does not enforce the latter two restrictions, but programs should not abuse this freedom, as future implementations may enforce them or silently change the meaning of the program.)" about scope restrictions with the "from foo import *" statement.
I guess I found an edge case that somehow skirted the restriction in this implementation.
Still... what a maroon! Verifies my statement that I am no python guru.
The project I'm working on currently consists of three basic build targets:
The core, containing almost all of the classes
The tests, containing only a few test classes and a main function. They touch (hopefully) every piece of code in the core.
The client, which only consists of a main function that used to create one object from the core and call what currently is a Hello World function. Now that main is completely empty, and even that did nothing to kill the error.
The linking error appears only if I build the client. It's about a static const in the core, looking like this:
class Transition
{
private:
Transition();
...more declarations...
public:
static const Transition NO_TRANSITION;
...more declarations...
};
Usage in Map.cpp:
Transition Map::searchTransition(Coordinate p, Direction d)
{
...code...
return Transition::NO_TRANSITION;
}
This is what I'm told by g++:
obj/gpp/Debug/Game/Map/Map.o:Map.cpp:(.rdata$.refptr._ZN10Transition13NO_TRANSITIONE[.refptr._ZN10Transition13NO_TRANSITIONE]+0x0): undefined reference to `Transition::NO_TRANSITION'
Map.cpp is also part of the core, it includes Transition.h, and the .o files are right where they are expected to be. There are no forward declarations between the two involved files.
What bothers me most: this only happens if I build the client. It works perfectly fine if I link the core with the test classes and their main instead, which means even more code gets linked. The only difference is that the tiny (or even empty) main function is replaced by a much bigger one that actually uses Map and Transition.
Also, that static const is not new and has never caused problems in the past. Since the tests accept it, I would think everything is perfectly fine, but apparently only as long as the tests are linked to it.
I've tried recreating the error in a smaller project, with the makefile being (mostly) the same, but the error won't show up. I have absolutely no idea what the important difference might be.
I'm using g++ 4.8 with -std=c++11 under cygwin. Visual Studio accepts the same code without trouble; I couldn't test under real Linux yet, though I expect it to behave the same as cygwin.
Does anybody have an idea what might be going wrong here?
EDIT: The "strange" behavior happened because there actually was a definition of Transition::NO_TRANSITION in the tests.
You only declared it, so there is no physical location where its data is stored. You should add a definition to your Transition source file and link the resulting object file into your build:
// Transition.cpp
#include "transition.h"
const Transition Transition::NO_TRANSITION;
Note that the linker works lazily, i.e. symbols you don't reference are not searched for; that's why your older code, which never used NO_TRANSITION's physical location, compiled and linked fine.
You have to define the static member. In your CPP, add this line (note there is no static keyword in the out-of-class definition):
const Transition Transition::NO_TRANSITION;
That will do.
I have a huge template file of which only a few functions are used, and I want to isolate that part for testing and comment out the other half. What is the best way to do this?
How can I do this on a Windows system, given that the template file is a .hxx?
I like Mohammad's answer. Oops... he removed it - but basically: use a tool like nm (I don't know a Windows equivalent, but there's sure to be one) to query the object files for instantiations. While your templates may live in a .hxx, you can only meaningfully talk about the subset of methods instantiated by some body of client code. You may need to do this analysis with inlining disabled, to ensure the function bodies are actually instantiated in a tangible form in the object files.
In the less likely event that you have instantiated code because it handles cases that you know the data doesn't (and won't evolve to) use, you may prefer automated run-time coverage analysis. Many compilers (e.g. GCC's g++ -ftest-coverage) and tools (e.g. purecov) provide this.
How about commenting out the whole file, then uncommenting individual methods when the linker complains, until the program links?
By the way, if you are using Visual Studio, commenting out the whole file is just a matter of using the following key shortcuts: Ctrl+A, then Ctrl+K+C. You can uncomment selected lines with Ctrl+K+U.
I'm working on a number crunching app using the CUDA framework. I have some static data that should be accessible to all threads, so I've put it in constant memory like this:
__device__ __constant__ CaseParams deviceCaseParams;
I use the call cudaMemcpyToSymbol to transfer these params from the host to the device:
void copyMetaData(CaseParams* caseParams)
{
cudaMemcpyToSymbol("deviceCaseParams", caseParams, sizeof(CaseParams));
}
which works.
Anyway, it seems (by trial and error, and also from reading posts on the net) that for some sick reason the declaration of deviceCaseParams and the copy operation on it (the call to cudaMemcpyToSymbol) must be in the same file. At the moment I have these two in a .cu file, but I really want the parameter struct in a .cuh file so that any implementation can see it if it wants to. That means I would also have to put the copyMetaData function in a header file, but that messes up linking (symbol already defined), since both .cpp and .cu files include the header (and thus both the MS C++ compiler and nvcc compile it).
Does anyone have any advice on design here?
Update: See the comments
With an up-to-date CUDA (e.g. 3.2) you should be able to do the memcpy from a different translation unit if you look up the symbol at runtime (i.e. by passing a string as the first argument to cudaMemcpyToSymbol, as you are doing in your example).
Also, with Fermi-class devices you can just malloc the memory (cudaMalloc), copy to the device memory, and then pass the argument as a const pointer. The compiler will recognise if you are accessing the data uniformly across the warps and if so will use the constant cache. See the CUDA Programming Guide for more info. Note: you would need to compile with -arch=sm_20.
If you're using pre-Fermi CUDA, you will have found out by now that this problem doesn't just apply to constant memory, it applies to anything you want on the CUDA side of things. The only two ways I have found around this are to either:
Write everything CUDA in a single file (.cu), or
If you need to break out code into separate files, restrict yourself to headers which your single .cu file then includes.
If you need to share code between CUDA and C/C++, or have some common code you share between projects, option 2 is the only choice. It seems very unnatural to start with, but it solves the problem. You still get to structure your code, just not in a typical C-like way. The main overhead is that every time you do a build you compile everything. The plus side of this (which I think is possibly why it works this way) is that the CUDA compiler has access to all the source code in one hit, which is good for optimisation.
Whilst refactoring some old code I realised that a particular header file was full of function declarations for functions long since removed from the .cpp file. Does anyone know of a tool that could find (and strip) these automatically?
You could, if possible, make a test.cpp file that calls them all; the linker will flag the ones that have no code as unresolved. This way your test code only needs to compile and link, without worrying about actually running.
PC-lint can be tuned for this dedicated purpose:
I tested the following code for your question:
void foo(int );
int main()
{
return 0;
}
lint.bat test_unused.cpp
and got the following result:
============================================================
--- Module: test_unused.cpp (C++)
--- Wrap-up for Module: test_unused.cpp
Info 752: local declarator 'foo(int)' (line 2, file test_unused.cpp) not referenced
test_unused.cpp(2) : Info 830: Location cited in prior message
============================================================
So you can use warning number 752 for your purpose:
lint.bat -"e*" +e752 test_unused.cpp
-"e*" will suppress all the warnings and +e752 will turn this specific one back on
If you index the code with Doxygen, you can see where each function is referenced from. However, you would have to browse through each class (one HTML page per class) and scan for those that have nothing pointing to them.
Alternatively, you could use ctags to generate a list of all functions in the code, and then use objdump or some similar tool to get a list of all functions in the .o files - and then compare the two lists. However, this can be problematic due to name mangling.
I don't think such a tool exists, because a function that has no body in the actual source tree might still be defined in some external library. This could only be done by a script that builds a list of the functions declared in a header and verifies whether each one is ever called.
I have a C++ ftplugin for vim that is able to check and report unmatched functions -- vimmers, the ftplugin suite is not yet straightforward to install. The ftplugin is based on ctags results (hence its heuristic could easily be adapted to other environments); sometimes there are false positives in the case of inline functions.
HTH,
In addition to Doxygen (@Milan Babuskov), you can check whether your compiler has warnings for this. E.g. gcc has -Wunused-function for static functions, and -fdump-ipa-cgraph.
I've heard good things about PC-Lint, but I imagine it's probably overkill for your needs.