Printing a Coq definition without loading the library

Coq uses a module system similar to OCaml's. In OCaml, we can call a function as Module_A.Module_B.func, where the prefix Module_A.Module_B. gives the path to func.
However, I cannot do the same thing in Coq. For instance, if I just run Print Coq.Arith.Minus.minus_n_O., Coq reports that Coq.Arith.Minus.minus_n_O is not a defined object.
I must first load the library before I can print the object. For instance, the following works:
From Coq Require Export Arith.Minus.
Print Coq.Arith.Minus.minus_n_O.
Is there a way to run Print Coq.Arith.Minus.minus_n_O. without loading the library first, just as when applying a function in OCaml?

In Coq there is a distinction between requiring and importing a module. Require is the keyword that lets you load the library while Import lets you drop the prefix. Export M does Import M but also says that when the current module is itself imported, then M will be imported too.
I would say what you want is
From Coq Require Arith.Minus.
Print Coq.Arith.Minus.minus_n_O.
without the Export.
To compare with OCaml, Import is like open, and Require is simply implicit in OCaml: a module is available as soon as it is used, basically.
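For instance, here is a small sketch of the Import behaviour, using the lemma from the question:
From Coq Require Import Arith.Minus.
Print minus_n_O. (* the short name is now in scope because of Import *)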
You can find more in the documentation of Import, Export and Require. Also in general to find more about commands, just look up the command index.


What is the correct way to read this GitHub OCaml compiler project, with bash scripts involved?

https://github.com/nlsandler/nqcc
At the above link, there is the compiler created by the author of this blog: https://norasandler.com/2017/11/29/Write-a-Compiler.html
I read through the first post and was faced with the problem that I almost always face when looking at a project on GitHub: where do I start?
I know OCaml syntax more or less, so I can read a single OCaml program and sort of understand what it does, but with a project at this level I don't even know where the files in src/ are being called from! You call nqcc, and then what happens? How do we get to the .ml files in src/? I'm having a hard time wrapping my head around this. Could someone guide me on how to navigate a huge project like this effectively?
In general, it involves understanding the build system, but your particular example is pretty easy to understand and is very transparent.
You need to know only two rules:
a binary foo corresponds to a file foo.ml;
a module Foo corresponds to a file foo.ml¹.
By applying these rules, we can figure out that nqcc.ml is the entry point. It calls the compile function, which has the following definition (copied here for ease of reference):
let compile prog_filename =
  let source_lines = File.lines_of prog_filename in
  let ast = Enum.reduce (fun line1 line2 -> line1^" "^line2) source_lines
            |> Lex.lex
            |> Parse.parse
  in
  Gen.generate prog_filename ast
So it refers to the File, Enum, Lex, Parse, and Gen modules. The first two come from outside the project (from the batteries library, which provides an extension to the OCaml standard library), while the last three correspond to the files lex.ml, parse.ml, and gen.ml, respectively.
¹ An optional but useful third rule:
a module Foo has its interface in a file named foo.mli.
The interface file is sort of like a header file: it may contain only types, and it usually contains documentation.
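To make the file/module correspondence concrete, here is a tiny sketch with made-up file names (it is not part of nqcc):
(* lex.ml -- by rule 2, this file automatically defines a module called Lex *)
let lex s = String.split_on_char ' ' s

(* main.ml -- by rule 1, the entry point of a binary called main;
   it refers to Lex, so the build system compiles lex.ml first *)
let () =
  "one two three"
  |> Lex.lex
  |> List.iter print_endline

(* build and run with, e.g.: ocamlopt lex.ml main.ml -o main && ./main *)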
The stuff in src/ gets compiled to nqcc.byte by setup.ml, which is run by the Makefile. setup.ml knows to do this by looking at the _oasis file, because all it does is call out to an OCaml build framework called OASIS. The nqcc shell script runs $(dirname $0)/nqcc.byte $1, which means "call the executable nqcc.byte in the same directory as this script, with the script's first argument".
How do you do this in general? Well, mostly experience. But starting with the Makefile or other build script is usually a good way to figure out what the main components are and how they hang together.

Does the SymPy function integral_steps reveal the result of the integration?

I'm using the SymPy function integral_steps to build a tool that, just like SymPy Gamma, reveals the integration steps when you ask it to integrate a function. My work-in-progress is available at https://lem.ma/1YH.
What I can't quite figure out is how to obtain the result of applying a particular rule. For example, consider the substitution rule
URule(u_var=_u, u_func=sin(x), constant=1, substep=ExpRule(base=E, exp=_u, context=exp(_u), symbol=_u), context=exp(sin(x))*cos(x), symbol=x)
The context field tells me that the function being integrated is exp(sin(x))*cos(x) and that the rule uses a particular substitution, but what is the result of the integration, so that I can report it to the user the same way SymPy Gamma does? What I currently do is call integrate at every step, but that seems quite inefficient.
Perhaps there's an option that one can pass to integral_steps to make that information available?
SymPy Gamma is open source, just like SymPy itself. Looking at its intsteps module, I see lines like
self.append("So, the result is: {}".format(
    self.format_math(_manualintegrate(rule))))
So the way to obtain the outcome of a rule is to call _manualintegrate(rule), which needs to be imported as
from sympy.integrals.manualintegrate import _manualintegrate
I imagine reading the rest of intsteps.py will be useful as well.
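As a minimal sketch (assuming a SymPy version that still exposes these internal functions), this reproduces the rule from the question and turns it into a result:
from sympy import Symbol, exp, sin, cos
from sympy.integrals.manualintegrate import integral_steps, _manualintegrate

x = Symbol('x')
rule = integral_steps(exp(sin(x)) * cos(x), x)  # the URule shown in the question
print(_manualintegrate(rule))                   # the antiderivative, exp(sin(x))
# _manualintegrate also works on sub-rules such as rule.substep, for intermediate results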

Structuring a Library in SML

I'm currently building a testing library in Standard ML (using Poly/ML as the interpreter). I have the following directory structure:
project/a.sml
project/src/b.sml
project/src/c.sml
...
Where a.sml is just a bunch of calls to use:
use "src/b.sml";
use "src/c.sml";
...
b.sml, c.sml, etc. are all structure definitions like this:
structure ComponentX =
struct
...
end
which form nice, logically separated components of the library. I sometimes also create one module in one file, and then introduce a substructure within the same module in another file.
I can then use the testing library fine within the root directory of the project, by calling use "a.sml".
However, I can't seem to use the code from outside its own directory, which is a bit of an issue. For example, say I'm in the parent directory of project. If I then call use "project/a.sml", the subsequent calls to use "src/x.sml" try to find a src directory in the parent (which doesn't exist).
Is there some way to do a relative use, or is there a better way to structure this altogether?
The use function itself in Poly/ML doesn't change the path when it is used recursively. You will need to change into the sub-directory explicitly, using OS.FileSys.chDir, before loading files from it. use is just a function, so you could redefine it if you wanted. The OS.Path and OS.FileSys structures could be useful.
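A minimal sketch of the chDir approach (directory names taken from the question): change into project/ so that the relative use calls inside a.sml resolve correctly, then change back afterwards.
let
  val here = OS.FileSys.getDir ()
in
  OS.FileSys.chDir "project";
  use "a.sml";
  OS.FileSys.chDir here
end;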
An alternative is to reorganise your code to make use of PolyML.make. You would have to rename your files to match the name of the structure each file contains, e.g. ComponentX.sml would contain structure ComponentX. For more on this, see polyml.org/documentation/Reference/PolyMLMake.html or this answer about Poly/ML with nested directory structures.

Python script to create another Python script (or Python executable)

As part of my code, I want to write a Python method which, when called, creates an executable file. (Another script that I can execute with Python's interpreter is also fine.)
This final script is almost fixed, except for a few input objects that only my method knows about and which are necessary for the final executable to work (a dictionary, for example). How can I link these objects to the final executable?
This sounds like a job for a templating engine, Jinja for example.
You can also use pickle to serialize/deserialize (save/load) the data which the other executables would use.
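Here is a minimal sketch of the pickle approach (all names are illustrative, not a fixed API): the method pickles the objects it knows about and writes out a small script that loads them back at run time.
import pickle

def write_runner(config, script_path="run_generated.py", data_path="config.pickle"):
    # Serialize the objects only this method knows about (e.g. a dictionary).
    with open(data_path, "wb") as f:
        pickle.dump(config, f)
    # Write the almost-fixed script; it reloads the pickled objects when executed.
    script = (
        "import pickle\n"
        "\n"
        "with open({data!r}, 'rb') as f:\n"
        "    config = pickle.load(f)\n"
        "\n"
        "print('Loaded config:', config)\n"
    ).format(data=data_path)
    with open(script_path, "w") as f:
        f.write(script)

write_runner({"threads": 4, "verbose": True})  # then run: python run_generated.py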

How to override Py_GetPrefix(), Py_GetPath()?

I'm trying to embed the Python interpreter and need to customize the way the Python standard library is loaded. Our library will be loaded from the same directory as the executable, not from prefix/lib/.
We have been able to make this work by manually modifying sys.path after calling Py_Initialize(); however, this generates a warning, because Py_Initialize looks for site.py in ./lib/, which is not on the path until after Py_Initialize has been called and we have updated sys.path.
The Python C API docs hint that it's possible to override Py_GetPrefix() and Py_GetPath(), but give no indication of how. Does anyone know how I would go about overriding them?
You could set Py_NoSiteFlag = 1, call Py_Initialize() and import the site module yourself as needed.
Have you considered using putenv to adjust PYTHONPATH before calling Py_Initialize?
I see it was asked long ago, but I've just hit the same problem. Py_NoSiteFlag will help with the site module, but generally it's better to rewrite Modules/getpath.c; Python docs officially recommend this for “[a]n application that requires total control.” Python does import some modules during initialization (the one that hit me was encodings), so, unless you don't want them or have embedded them too, the module search path has to be ready before you call Py_Initialize().
From what I understand Py_GetPath merely returns module search path; Py_GetProgramFullPath is self-describing; and Py_GetPrefix and Py_GetExecPrefix are not used by anyone, except some mysterious “ILU”.
The following functions can be called before calling Py_Initialize():
Py_SetProgramName()
Py_SetPythonHome()
Py_SetPath()
All of these affect the way Python finds modules. I recommend reading the documentation on these functions and playing around with them.
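As a minimal sketch (assuming Python 3, built before these calls were deprecated; the program name and path are illustrative), combining Py_NoSiteFlag with Py_SetPath so the search path points next to the executable before initialization:
/* embed.c -- compile with e.g. cc embed.c $(python3-config --embed --cflags --ldflags) */
#include <Python.h>

int main(void)
{
    Py_NoSiteFlag = 1;               /* skip the automatic "import site" */
    Py_SetProgramName(L"myapp");     /* illustrative program name */
    Py_SetPath(L"./lib");            /* module search path, set before Py_Initialize */
    Py_Initialize();
    PyRun_SimpleString("import sys; print(sys.path)");
    Py_Finalize();
    return 0;
}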