Automatically eval a dependent file (buffer) - clojure

I have a clj file which defines a schema. There are about 3 or 4 files that depend on the definition of that schema to generate functions. I would like it so that whenever the schema file changes, the dependent files are automatically re-evaluated within the REPL.
This is quite useful because re-evaluation is also needed when macros are changed. I've looked at load-file but I don't think it does the trick. Are there any suggestions for how one might go about doing this?
I want to manually trigger a commented-out form in A.
Files B, C, D have a dependency on A. If A changes, then B, C and D get re-evaluated.
:reload-all does the reverse, i.e. if A has a dependency on B, C and D, then reloading A will re-evaluate all of its dependencies.
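For reference, here is a minimal sketch of the two directions, using hypothetical namespaces a, b, c and d where b, c and d each require a. The "reload the dependents" direction is what the clojure.tools.namespace library provides; it is not mentioned in the question, so treat it as one possible approach rather than the accepted answer:

(require 'b :reload-all)   ; reloads b plus everything b requires, i.e. a

;; reloading the *dependents* of a changed namespace:
;; add org.clojure/tools.namespace as a dependency, then
(require '[clojure.tools.namespace.repl :refer [refresh]])
(refresh)   ; reloads the changed a, then b, c and d, in dependency order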

I have a template project set up here:
https://github.com/io-tupelo/clj-template
It uses the lein-test-refresh plugin so that changed files are automatically reloaded upon each editor save, and then all unit tests are re-run. IMHO this is even better (& faster!) than experimenting in the REPL. The Kaocha tool has similar capabilities.
You can also see this answer re :reload-all.
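For concreteness, the lein-test-refresh setup boils down to a one-line plugin entry in project.clj. The coordinates below are the plugin's usual ones, but the version shown is only illustrative, so check the plugin's README:

;; project.clj (sketch)
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.11.1"]]
  :plugins [[com.jakemccrary/lein-test-refresh "0.25.0"]])

;; then run `lein test-refresh` in a terminal; it watches the source and test
;; directories, reloads changed namespaces on save, and re-runs the tests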

Related

Why am I getting some of my python functions, when I import my module, but not others?

I'm writing function libraries in Python 2.7.8, to use in some UAT testing using froglogic Squish. It's for my employer, so I'm not sure how much I can share and still conform to company privacy regulations.
Early in the development, I put some functions in some very small files. There was one file that contained only a single function. I could import the file and use the function with no problem.
I am at a point where I want to consolidate some of those tiny files into a larger file. For some reason that completely eludes me, some of the functions that I copy/pasted into this larger file, are not being found, and a "NameError: global name 'My_variableStringVerify' is not defined" error is displayed, for example. (I just added the "My_", in case there was a name collision with some other function...)
This worked with the EXACT same simple function in a separate 'module'. Other functions in this python file -- appearing both before and after this function in the new, expanded module -- are being found and used without problems. The only module this function needs is re. I am importing that. I deleted all the pyc files in the directory, in case that was not getting updated (I'm pretty sure it was, from the datetime on the pyc file).
I have created and used dozens of functions in a dozen of my 'library modules', all with no issues. What's so special about this trivial piece-of-crap function when it's part of a different module? It worked before, and it STILL works -- as long as I do not try to use it from the new library module.
I'm no Python guru, but I have been doing this kind of thing for years...
Ugh. What a fool. The answer was in the error, after all: "global name xxx is not found". I was trying to use the function directly inside a Squish API call, which is the global scope. Once I moved the call to my function outside of the Squish API call (using it in the local scope), it worked fine.
The detail that surprised me: I was using "from foo import *", in both cases (before and after adding it to another 'library' module of mine).
When this one function was THE ONLY function in foo, I was able to use it in the global scope successfully.
When it was just one of many functions in foo-extended (names have been changed, to protect the innocent), I could NOT use it in the global scope. I had to reference it in the local scope.
After spending more time reading https://docs.python.org/2.0/ref/import.html (yes, it's old), I'm surprised it appeared in the global scope in either case. That page did state that "(The current implementation does not enforce the latter two restrictions, but programs should not abuse this freedom, as future implementations may enforce them or silently change the meaning of the program.)" about scope restrictions with the "from foo import *" statement.
I guess I found an edge case that somehow skirted the restriction in this implementation.
Still... what a maroon! Verifies my statement that I am no Python guru.

Is it possible to modify the Lua script to require?

When I call require 'name' in Lua, the name can be either a preloaded module name or a file that exists in the current working directory.
I have the following two questions:
A. I would like to know if it's possible to find out whether a preloaded module or a file will be required right before it will be required.
B. And if it's a file, I want to modify the script which will be required (by prepending/appending some code on top of existing one) and then require the modified script finally.
Are A and B both possible?
P.S.: I'm using Lua with C++.
Are A and B both possible?
Yes, as you can write your own "require" function that does what you need (including everything you describe). You can also look at package.searchers, as registering your function as one of the searchers may be enough to implement what you want.
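A minimal sketch of the searcher approach, assuming Lua 5.2 or later (where the table is called package.searchers; in 5.1 it is package.loaders). The injected prelude/postlude strings and the searcher name are illustrative:

local function patched_searcher(name)
  local path, err = package.searchpath(name, package.path)
  if not path then return err end        -- not a plain file; let other searchers handle it
  local f = assert(io.open(path, "r"))
  local src = f:read("*a")
  f:close()
  src = "-- prelude\n" .. src .. "\n-- postlude"   -- modify the script here
  return assert(load(src, "@" .. path)), path      -- require() will call this chunk
end

-- position 2: after the package.preload searcher, before the stock file loader;
-- this also answers question A, since anything our searcher sees is file-based
table.insert(package.searchers, 2, patched_searcher)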

How to organize subroutines for use by multiple commands?

I am working on creating a package with two new commands, say foo and bar.
For example, suppose foo.ado contains:
program define foo
...
rex
end
program define rex
...
end
But my other command, bar.ado, also needs to call rex. Where should I put rex?
I see the following few options:
Create a rex.ado file as well.
Create a rex.do file and include it from within both foo.ado and bar.ado using include "`c(sysdir_plus)'r/rex.do" at the bottom of each file.
Copy the code into both foo.ado and bar.ado, which seems ugly because now the code must be maintained in two places.
What is best practice for organizing subroutines that are needed by both foo and bar?
Also, should the subroutine be called rex, _rex, or something else — maybe _foobar_rex — to indicate it is actually a sub-command that foo and bar depend on to work correctly rather than a separate command intended to stand on its own?
Create a rex.ado file as well
Your question is a bit too broad. Personally, I would go with the first option to be safe, although it really depends on the structure of your project. Sometimes including rex in a single ado file may be enough. This will be the case, for example, if foo is a wrapper command. However, for most other use cases, including two commands sharing a common program, I strongly believe that you will need to have a separate ado file.
The second option is obviously unnecessary, since the first does the same thing and, on top of that, does not have to load the program from disk every single time you call it. The third option is probably the worst in a programming context, as it may create conflicts and will be difficult to maintain down the road.
With regards to naming conventions, I would recommend using something like _rex only if you include the program as a subroutine in an ado file. Otherwise, rex will do just fine and will also indicate that the program has a wider scope within your project. It is also better, in my opinion, to provide a more elaborate explanation about the intended use of rex using a comment at the start of the ado file, rather than trying to incorporate this in the name.

Changing global variable names

I'm working on a huge code base written many years ago. We're trying to implement multi-threading and I'm in charge of cleaning up the global variables (sigh!)
My strategy is to move all global variables into a class; individual threads will then use instances of that class, and the globals will be accessed through a class instance and the -> operator.
In a first pass, I've compiled a list of global variables using nm by looking for symbols of type B and D (uninitialized and initialized data). The list is not complete, and in the case of static variables I don't get file and line number info.
The second stage is even messier: I have to replace every global in the code base with the classinstance->global_name pattern. I'm using cscope's "Change text string" for this. The problem is that some globals' names are also used locally inside functions, and cscope replaces those occurrences as well.
Is there another way to go about this? Any strategies or help would be appreciated!
Just some suggestions, from my experience:
use Eclipse: the C++ indexer is very good, and when dealing with a large project I find it very useful for tracking variables. shift+ctrl+g (I have forgotten how to reach it from the menus!) lets you search all the references, and ctrl+alt+h (open call hierarchy) shows the caller-callee trees...
use Eclipse: it has good refactoring tools that can rename a variable without touching same-name-different-scope variables (they often fail when templates are involved, but I still find them better than the Visual Studio 2008 counterpart).
use Eclipse: I know, it takes some time to get started with, but once you do, it's very powerful. It can easily deal with an existing makefile-based project (File -> New -> Project -> Makefile Project with Existing Code).
I would consider exposing accessors rather than raw class members: it's possible that some of the globals will be shared among threads and will need some locking in order to be used properly. So I would prefer classinstance->get_global_name() (see the sketch after this answer).
As a final note, I don't know whether using the Eclipse indexer from the command line would be helpful for your task. You can find some examples by googling for it.
This question/answer can give you some more hints: any C/C++ refactoring tool based on libclang? (even simplest "toy example" ). In particular I do quote "...C++ is a bitch of a language to transform"
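A minimal C++ sketch of that accessor idea; the class and member names are made up for illustration, and each accessor takes a lock so the instance can be shared between threads:

#include <mutex>
#include <string>

class Globals {
public:
    std::string get_global_name() {
        std::lock_guard<std::mutex> lock(mutex_);
        return global_name_;
    }
    void set_global_name(const std::string& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        global_name_ = value;
    }
private:
    std::mutex mutex_;
    std::string global_name_;   // formerly a free global variable
};

// usage, matching the classinstance->get_global_name() pattern above:
// Globals* classinstance = ...;  classinstance->set_global_name("value");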
Halfway there: if a function uses a local name that hides the global name, the object file won't contain an undefined reference to that symbol. nm can show you the undefined symbols, and then you know in which files you must replace at least some instances of that name.
However, you still have a problem in the rare case that a file uses the global name in one function and hides it with a local in another. I'm not sure whether this can be resolved with -ffunction-sections, but I think so: nm can show the section, so you'll see the undefined symbols used in foo() appear in section .text.foo.

Registering each C/C++ source file to create a runtime list of used sources

For a debugging and logging library, I want to be able to find, at runtime, a list of all of the source files that the project has compiled and linked. I assume I'll be including some kind of header in each source file, and the preprocessor __FILE__ macro can give me a string constant for that file, so I just need to somehow "broadcast" that information from each file to be gathered by a runtime function.
The question is how to elegantly do this, and especially if it can be done from C as opposed to C++. In C++ I'd probably try to make a class with static storage to hold the list of filenames. That header would create a file-local static instance of the class in each source file, which on creation would append the __FILE__ string (or whatever) to the class's static data members, perhaps as a linked list.
But I don't think this will work in C, and even in C++ I'm not sure it's guaranteed that each element will be created.
I wouldn't do that sort of thing right in the code. I would write a tool which parses the project file (vcproj, makefile), or even just scans the project directory for *.c* files, and generates an additional C source file containing the names of all the source files in some kind of pre-initialized data structure.
I would then make that tool part of the build process so that every time you do a build this would all happen automatically. At run time, all you would have to do is read that data structure that was built.
I agree with Ferruccio, the best way to do this is in the build system, not the code itself. As an expansion of his idea, add a target to your build system which dumps a list of the files (which it has to know anyway) to a C file as a string, or array of strings, and compile this generated file into your program. This avoids a lot of complication in the source, and is expandable if you want to add additional information, like the version number from your source control system, who built the executable, etc.
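A sketch of what such a generated file could look like; the file name, symbol names and paths below are illustrative, not from either answer, and a build-time tool would regenerate it from the project's actual file list:

/* source_list.c -- generated by the build, do not edit */
const char *project_source_files[] = {
    "src/main.c",
    "src/logger.c",
    "src/parser.c",
};
const unsigned project_source_file_count =
    sizeof(project_source_files) / sizeof(project_source_files[0]);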
There is a standard way on UNIX and Linux: ident. For every source file you create an ID tag - usually it is assigned by your version control system, e.g. via SVN keywords.
Then, to find out the name and revision of each source file, you just use the ident command. If you need to do it at runtime, check out how ident does it - the source for it should be freely available.
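The classic pattern ident relies on looks roughly like this in each source file (illustrative only; the version control system expands the $Id$ keyword on commit/checkout, and depending on your compiler you may need to mark the array as used so it is not optimized away):

/* example.c */
static const char rcsid[] = "$Id$";   /* expanded by the VCS, e.g. with svn:keywords */

/* later, on the built program:  ident my-binary  */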
There's no way to do it in C. In C++ you can create a class like this:
struct Reg {
    Reg( const char * file ) {
        StaticDictionary::Register( file );
    }
};
where StaticDictionary is a singleton container for all your file names. Then in each source file:
static Reg regthisfile( __FILE__ );
You would want to make the dictionary a Meyers singleton to avoid order of creation problems.
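The answer doesn't show StaticDictionary itself; here is a hedged sketch of what it might look like as a Meyers singleton (a static local inside an accessor function, constructed on first use, so it exists before any Reg object registers):

#include <vector>

class StaticDictionary {
public:
    static void Register(const char* file) {
        files().push_back(file);
    }
    static const std::vector<const char*>& Files() {
        return files();
    }
private:
    static std::vector<const char*>& files() {
        static std::vector<const char*> instance;  // constructed on first use
        return instance;
    }
};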
I don't think you can do this in the way you outline in a "passive" mode. That is, you are going to have to somehow run code for each source file to add it to the registry, and it's hard to get that to happen automatically.
Of course, it's possible that you can make that code very unobtrusive using macros. It might be problematic for C source files that don't have an "entry point", so if your code isn't already organised as "modules" with e.g. an init() function for each module, it might be hard. Static initialization code might be possible; I'm not 100% sure whether the order in which things are initialized creates problems here.
Using static storage in the registry module sounds like an excellent idea; a plain linked list or simple hash table should be easy enough to implement if your project doesn't already include a general-purpose utility library (see the sketch below).
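A minimal C sketch of that static-storage-plus-linked-list registry, with explicit per-module registration since C has no reliable equivalent of C++ static-object constructors; every name here is illustrative rather than from the question:

#include <stdio.h>

/* one statically allocated node per source file, linked into a list at runtime */
struct file_node {
    const char *name;
    struct file_node *next;
};

static struct file_node *g_registry = NULL;

static void registry_add(struct file_node *node, const char *name) {
    node->name = name;
    node->next = g_registry;
    g_registry = node;
}

/* each module defines a node and registers itself from its init() function */
static struct file_node this_file_node;
void module_foo_init(void) {
    registry_add(&this_file_node, __FILE__);
    /* ...rest of the module's initialization... */
}

void registry_dump(void) {
    for (const struct file_node *n = g_registry; n; n = n->next)
        printf("%s\n", n->name);
}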
In C++ your solution will work. It's guaranteed.
Edit: I just worked out a solution in my head: change the rule in your makefile to add -include "cfiles_register.h" to each g++ invocation:
%.o : %.cpp
	$(CXX) -include cfiles_register.h -c -o $@ $<
Put the implementation you proposed in the question into that cfiles_register.h.
Using static instances in C++ would work fine.
You could also do this in C, but you need to use runtime-specific features - for the MSVC CRT, take a look at http://www.codeguru.com/cpp/misc/misc/threadsprocesses/article.php/c6945/
For C, you could do it with a macro: define a variable whose name corresponds to your file, and then scan the symbols of your executable. Just as an idea:
#define TRACK_FILE(name) char _file_tracker_##name;
use it in your my_c_file.c like this:
TRACK_FILE(my_c_file_c)
and then grep all file/variable names from the binary like this
nm my-binary | grep _file_tracker
Not really nice, but...
Horrible idea, I'm sure, but use a singleton. And in each file do something like
Singleton.register(__FILE__);
at global scope. It'll only work for .cpp files, though.
I did something like this years ago as a novice, and it worked. But I'd cringe to do it now. I'd add a build step now.
I agree with those who say that it is better to avoid doing this at run time, but in C, you can initialize a static variable with a function call, that is, in every file:
static int doesntmatter = register( __FILE__);