Resolving module name conflicts, need to get at ORD_MAP signature - sml

I'm working on a relatively large SML codebase. It was originally written to compile with MLton, but I'm now working with it under SML/NJ. I need to use RedBlackMapFn, which is defined in smlnj-lib.cm. However, I get an error:
elaborate/elaborate-bomenv.fun:9.20-9.27 Error: unbound signature: ORD_KEY
elaborate/elaborate-bomenv.fun:14.21-14.40 Error: unbound functor: RedBlackMapFn
elaborate/elaborate-bomenv.fun:32.20-32.27 Error: unbound signature: ORD_KEY
elaborate/elaborate-bomenv.fun:37.21-37.40 Error: unbound functor: RedBlackMapFn
So I assume that smlnj-lib.cm is not being pulled in by CM. In an effort to fix this, I added $/smlnj-lib.cm to the sources.cm file in the directory I'm working in. That causes a separate issue:
elaborate/sources.cm:25.1-25.18 Error: structure Random imported from $SMLNJ-LIB/Util/smlnj-lib.cm#243997(random.sml) and also from ./(sources.cm):lib/(sources.cm):basic/(sources.cm):random.sml
elaborate/sources.cm:25.1-25.18 Error: structure Queue imported from $SMLNJ-LIB/Util/smlnj-lib.cm#436143(queue.sml) and also from ./(sources.cm):lib/(sources.cm):basic/(sources.cm):two-list-queue.sml
No dice. I tried removing the Random structure that's coming from ./(sources.cm):lib/(sources.cm):basic/(sources.cm):random.sml, but it appears that it isn't equivalent to the one defined in the standard library, so I can't just substitute one for the other.
I'd like to use something like Python's from ... import ... as ... mechanism to give a new name to the Random that's coming from the standard library, but CM's documentation doesn't offer any hints as to how I'd go about that.
How can I resolve a module naming conflict across multiple SML files?

I ended up splitting the problematic files off into a separate .cm. The problem files here are elaborate-bomenv.{sig,fun}. The .cm file for this directory is sources.cm, which caused the errors when it looked like:
Group
...
is
$/basis.cm
...
elaborate-bomenv.fun
elaborate-bomenv.sig
...
So instead, I made an elaborate-bomenv-sources.cm that looks like:
Group
signature ELABORATE_BOMENV
functor BOMEnv
is
$/smlnj-lib.cm
...
elaborate-bomenv.sig
elaborate-bomenv.fun
and changed the original sources.cm to read:
Group
...
is
$/basis.cm
...
./elaborate-bomenv-sources.cm
...
It's ugly, but it works: because the new group only exports the ELABORATE_BOMENV signature and the BOMEnv functor, the Random and Queue structures pulled in from $/smlnj-lib.cm stay local to that group and no longer clash with the ones defined under lib/basic.

Related

Get ScriptOrigin from v8::Module

It seems trivial, but I've searched far and wide.
I'm using this resource to make v8 run with ES Modules and I'm trying to implement my own search/load algorithm. Thus far, I've managed to make a simple system which loads a file from a known location, however I'd like to implement external modules. This means that the known location is actually unknown throughout the application. Take the following directory tree as an example:
~/
  index.js
    import 'module1_index'; // This is successfully resolved to /libs/module1/module1_index.js
/libs/module1/
  module1_index.js
    export * from './lib.js' // This import fails because it is looking for ./lib.js in ~/source
  lib.js
    export /* literally anything */
The above example begins by executing the index.js file from ~. When module1_index.js is executed, lib.js is looked up relative to ~ and consequently fails. To address this, files need to be looked up relative to the file currently being executed; however, I have not found a way to do this.
First Attempt
I'm given the opportunity to look for the file in the callResolve method (main.cpp:280):
v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context, v8::Local<v8::String> specifier, v8::Local<v8::Module> referrer)
or in loadModule (main.cpp:197):
v8::MaybeLocal<v8::Module> loadModule(char code[], char name[], v8::Local<v8::Context> cx)
however, as mentioned, I have found no function for extracting the ScriptOrigin from the module. I should mention that when files are successfully resolved, the ScriptOrigin is initialized with the exact path to the file and is reliable.
Second Attempt
I set up a stack which keeps track of the current file being executed. Every import that is made is pushed onto the stack; once the file has finished executing, it is popped. This also did not work, as there was no way to reliably determine when a file had finished executing.
It seems that the loadModule function does just that: loads. It does not execute, so I cannot pop after the module has loaded, as the imports are not fully resolved. The checkModule/execModule functions are only invoked on dynamic imports, making them useless for determining the completion of a static import.
I'm at a loss. I'm not familiar enough with v8 to know where to look, although I have dug through some NodeJS source code looking for an implementation, to no avail.
Any pointers are greatly appreciated.
Thanks.
Jake.
I don't know much about module resolution, but looking at V8's sources, I can see an example mapping a v8::Module to a std::string absolute_path, which sounds like what you're looking for. I'm not copying the whole code here, because the way it uses custom metadata is a bit involved; the short story is that it keeps a std::unordered_map to keep data about each module's source on the side. (I wonder if it would be possible to use Module::ScriptId() as that map's key, for simplification.)
Code search finds a bunch more example uses of InstantiateModule, mostly in tests. Tests often serve as useful examples/documentation :-)
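To make that concrete, here is a minimal sketch of the side-table idea, assuming a V8 version that exposes Module::ScriptId(); the helper names (rememberModulePath, lookupReferrerPath) are mine, not V8 API:
#include <string>
#include <unordered_map>
#include <v8.h>

// Side table: module's script id -> absolute path of the file it was compiled from.
static std::unordered_map<int, std::string> g_module_paths;

// Call this in loadModule, right after ScriptCompiler::CompileModule succeeds.
void rememberModulePath(v8::Local<v8::Module> module, const std::string& absolute_path) {
  g_module_paths[module->ScriptId()] = absolute_path;
}

// Call this at the top of callResolve: it tells you which file the referrer was
// loaded from, so a relative specifier can be resolved against its directory.
std::string lookupReferrerPath(v8::Local<v8::Module> referrer) {
  auto it = g_module_paths.find(referrer->ScriptId());
  return it != g_module_paths.end() ? it->second : std::string();
}
loadModule already knows the path it read the source from, so it can populate the table; callResolve can then join the referrer's directory with the specifier before reading the file, which is the piece missing in the example above.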

Custom submodules in pytorch / libtorch C++

Full disclosure: I asked this same question on the PyTorch forums a few days ago and got no reply, so this is technically a repost, but I believe it's still a good question, because I've been unable to find an answer anywhere online. Here goes:
Can you show an example of using register_module with a custom module?
The only examples I’ve found online are registering linear layers or convolutional layers as the submodules.
I tried to write my own module and register it with another module and I couldn’t get it to work.
My IDE is telling me no instance of overloaded function "MyModel::register_module" matches the argument list -- argument types are: (const char [14], TreeEmbedding)
(TreeEmbedding is the name of another struct I made which extends torch::nn::Module.)
Am I missing something? An example of this would be very helpful.
Edit: Additional context follows below.
I have a header file "model.h" which contains the following:
struct TreeEmbedding : torch::nn::Module {
    TreeEmbedding();
    torch::Tensor forward(Graph tree);
};

struct MyModel : torch::nn::Module {
    size_t embeddingSize;
    TreeEmbedding treeEmbedding;

    MyModel(size_t embeddingSize = 10);
    torch::Tensor forward(std::vector<Graph> clauses, std::vector<Graph> contexts);
};
I also have a cpp file "model.cpp" which contains the following:
MyModel::MyModel(size_t embeddingSize) :
    embeddingSize(embeddingSize)
{
    treeEmbedding = register_module("treeEmbedding", TreeEmbedding{});
}
This setup still has the same error as above. The code in the documentation does work (using built-in components like linear layers), but using a custom module does not. After tracking down torch::nn::Linear, it looks as though that is a ModuleHolder (Whatever that is...)
Thanks,
Jack
I will accept a better answer if anyone can provide more details, but just in case anyone's wondering, I thought I would put up the little information I was able to find:
register_module takes a string as its first argument; its second argument can either be a ModuleHolder (I don't know what this is...) or a shared_ptr to your module. So here's my example:
treeEmbedding = register_module<TreeEmbedding>("treeEmbedding", make_shared<TreeEmbedding>());
This seemed to work for me so far.
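In case it helps anyone else, here is a minimal sketch of how that fits together, assuming the member is changed to a shared_ptr (the stand-in forward below ignores the Graph type from the question):
#include <torch/torch.h>
#include <memory>

struct TreeEmbedding : torch::nn::Module {
  // Stand-in forward; the real one takes a Graph, which isn't shown here.
  torch::Tensor forward(torch::Tensor x) { return x; }
};

struct MyModel : torch::nn::Module {
  // Held as a shared_ptr so register_module can track it; passing a plain
  // TreeEmbedding by value is what produces the "no instance of overloaded
  // function" error from the question.
  std::shared_ptr<TreeEmbedding> treeEmbedding;

  MyModel() {
    treeEmbedding = register_module("treeEmbedding", std::make_shared<TreeEmbedding>());
  }
};
The other route, which is what torch::nn::Linear takes, is to name the implementation TreeEmbeddingImpl and wrap it with TORCH_MODULE(TreeEmbedding); the macro generates the ModuleHolder that register_module also accepts.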

Org-mode library of babel: can't #'CALL what I define

I want to use the library of babel of org-mode to define a new Clojure function that would be accessible to any org-mode document.
What I did was to define that new function in a named code block like this:
#+NAME: foo
#+BEGIN_SRC clojure
(defn foofn
  []
  (println "foo test"))
#+END_SRC
Then I saved that into my library of babel using C-c C-v i, then I selected the org file to save in the library, and everything looked fine.
Then in another org file I wanted to call that block such that it becomes defined in that other context. So I used the following syntax:
#+CALL: foo
However, when I execute that org file I get the following error:
Reference `nil' not found in this buffer
Which tells me that it can't find that named block.
Any idea what I am doing wrong? Also, once it works, is there a way to pass parameters to that code block when calling it with #+CALL:?
Finally, where is my library of babel supposed to be located? (How do I know whether it got properly added or not?)
I am obviously missing some core information that I can't find in the worg documentation.
Try:
#+CALL: foo()
Also, check the value of the variable org-babel-library-of-babel to make sure that the C-c C-v i worked properly.

How do you compile multiple files in different folders in D?

project
---|source
------|controllers
------|models
------|lib
---------|field.d
------|app.d
I run dub but I get this error:
Error: module field from file ... conflicts with another module field from
file source/lib/field.d
field.d looks like this:
module field;

class Field(T) {
    this(T def_val, bool required, string help_text);
    bool validate();
    private bool _validate();
}
Always put a module statement in any file that is going to be imported, and use a package name consistently as well to avoid conflicts.
So, instead of calling it simply module field;, call it module myapplication.field;, or even module myapplication.lib.field;, and of course, also import it by the same full name when you use it.
I'm not sure if dub will just work like that though (I don't use it personally), but the language lets you give a module any name, even if it doesn't match the filename, which helps in situations like this, avoiding name conflicts.
In general, if you give them all full, unique names and then compile them all at once (dmd app.d lib/field.d, and any other files your project has), things will just work.

RestKit entity mapping for UnitTests does not work

I'm trying to create my RKEntityMapping outside of my unit test. The problem is that it only works if I create it inside my test. For example, this works:
RKEntityMapping *accountListMapping = [RKEntityMapping mappingForEntityForName:@"CustomerListResponse"
                                                           inManagedObjectStore:_sut.managedObjectStore];
[accountListMapping addAttributeMappingsFromDictionary:@{@"count": @"pageCount",
                                                         @"page": @"currentPage",
                                                         @"pages": @"pages"}];
While the following does not work. The call to accountListMapping returns exactly what is shown above, using the same managed object store:
RKEntityMapping *accountListMapping = [_sut accountListMapping];
When the RKEntityMapping is created in _sut I get this error:
<unknown>:0: error: -[SBAccountTests testAccountListFetch] : 0x9e9cd10: failed with error:
Error Domain=org.restkit.RestKit.ErrorDomain Code=1007 "Cannot perform a mapping operation
with a nil destination object." UserInfo=0x8c64490 {NSLocalizedDescription=Cannot perform
a mapping operation with a nil destination object.}
I'm assuming the nil destination object it is referring to is destinationObject:nil.
RKMappingTest *maptest = [RKMappingTest testForMapping:accountListMapping
                                          sourceObject:_parsedJSON
                                     destinationObject:nil];
Make sure that the file you have created has target membership in both your main target and your test target. You can set this by:
clicking on the .m file of your class,
opening the Utilities toolbar (the one on the right), and
ticking both targets in the Target Membership section.
This is because if your class does not have target membership in your test target, the test target actually creates its own copy of the class, meaning it has a different binary file from the main target. That copy ends up using the test target's version of the RestKit binary rather than the main project's RestKit. The isKindOfClass check then fails when it tries to see whether the mapping you passed is an RKObjectMapping from the main project, because it is an RKObjectMapping from the test target's version of RestKit; your mapping doesn't get used, and you get your crash.
At least this is my understanding of how the LLVM compiler works. I'm new to iOS dev, so please feel free to correct me if I got something wrong.
This problem may also be caused by duplicated class definitions, when RestKit components are included for multiple targets individually using CocoaPods.
For more information on this, have a look at this answer.
I used a category on the mapped object, for example RestKitMappings+SomeClass:
+ (RKObjectMapping *)responsemappings {
    // build and return the same mapping the app code uses
    return mappings;
}
Now this category has to be included in the test target as well, otherwise the mapping will not be passed.
When you're running a test you aren't using the entire mapping infrastructure, so RestKit isn't going to create a destination object for you. It's only going to test the mapping itself. So you need to provide all three pieces of information to the test method or it can't work.