Set up Concurrent ML in SML/NJ - sml

I am trying to get Concurrent ML to run in SML/NJ. I saw a post about using CM.make to do this, but I could not find a CM.make file on my system. How can I resolve this?

Well, I can load the library directly. For instance, in my case I could do
sml /opt/smlnj/cml/src/cml.cm
Knowing where the library is located, you could also use CM.make. For instance, in my REPL, if I do
CM.make "/opt/smlnj/cml/src/cml.cm";
it loads the CML library. With either approach, if I then do:
val r = CML.version
I get:
val r = {date="September 15, 1997",system="Concurrent ML",version_id=[1,0,10]}
: {date:string, system:string, version_id:int list}
The CM library should already be available in your SML/NJ installation; I did not have to do anything special to load it.
See the SML/NJ FAQ in the section about loading libraries.

Related

How do I avoid shadowing a stdlib module in OCaml?

I have a program that uses the Bytes module from the OCaml standard library and also opens the Core_kernel.Std module at the top of the file:
open Core_kernel.Std
...
let buf = Bytes.make bom_len '\x00' in
The problem I am having is that the latest version of Core_kernel introduced a new Bytes module that shadows the one from the standard library, which results in an Unbound value Bytes.make compilation error.
Is there a way to solve this naming issue without getting rid of the open at the top of the file? If I did that it would require changing lots of things.
You could provide an alternative name for the Bytes module, like this:
module B = Bytes
open Core_kernel.Std
let buf = B.make 10 '\x00'
and then do a search-and-replace in your code to change Bytes to B.
Another solution would be to avoid using open, but this would require a lot of changes in your code, I guess.
Core_kernel provides a Caml module that re-exports everything available in the standard library.
So, you could write this as
open Core_kernel.Std
...
let buf = Caml.Bytes.make bom_len '\x00' in
Unfortunately, Caml.Bytes was only added in version 113.00.00, which is not in OPAM yet.

Python speech recognition for Raspberry Pi 2

I am trying to find a speech recognition library similar to PySpeech that will work on a Raspberry Pi 2. I am new to this and have tried researching, but there are so many options that I just need help choosing the correct one.
All I am trying to do is have the program recognize keywords when a user says something, then open up the corresponding part of my code, which will just display information about that keyword.
Right now I am using Python 2.7 and PyQt4 to display what I want, but I am willing to change if there is something easier, such as KivyPi, PyGame, etc.
I am open to any ideas or any help to point me in the right direction.
Thank You!
I created a library called SpeakPython that helps Python developers do exactly this, and I just released it under GPL3. The library is built upon pocketsphinx (sphinxbase) and gstreamer (for streaming recognition, which leads to fast results). It will allow you to attach Python code to speech commands.
It's very accurate and dynamic for command parsing such as this, and I've tested it on the Pi already. Let me know if you have any issues.
To recognize a few words on a Raspberry Pi 2 with Python, you can use the Python bindings for Pocketsphinx.
You can find a pocketsphinx tutorial to get started here.
You can find some installation details for the RPi here.
You can find a code example here.
You can find an already functioning example using pocketsphinx and Python here.
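For a rough illustration of how little code this takes, here is a minimal sketch. It assumes the pocketsphinx Python package (the version that ships the LiveSpeech helper) is installed and a default microphone is available; the keyword check is just a placeholder:
from pocketsphinx import LiveSpeech

# Continuously decode speech from the default microphone and print each phrase.
for phrase in LiveSpeech():
    text = str(phrase)
    print text
    if 'weather' in text:        # placeholder keyword
        print 'showing weather information...'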
Here is what I have up and running on my Pi. It uses the SpeechRecognition library, PyAudio, and espeak (called via subprocess) for a voice response (if you don't want that, just take it out). It listens for voice input, prints it as text, and speaks it back to you. You can manipulate this to do basically whatever you want:
import pyaudio                    # PyAudio must be installed for sr.Microphone to work
from subprocess import call
import speech_recognition as sr   # note the "as sr" alias, used below

r = sr.Recognizer()
r.energy_threshold = 4000

# device_index and the audio settings below are machine-specific
with sr.Microphone(device_index=2, sample_rate=44100, chunk_size=512) as source:
    print 'listening..'
    audio = r.listen(source)
    print 'processing'
    try:
        message = r.recognize_google(audio, language='en-us', show_all=False)
        print message
        call(['espeak', message])
    except (sr.UnknownValueError, sr.RequestError):
        call(['espeak', 'Could not understand you'])
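To get the keyword-driven behaviour described in the question, one option is to pass the recognized text to a small dispatcher, e.g. call handle_message(message) inside the try block above. This is only a sketch; the keyword table and responses are placeholders:
from subprocess import call

# Placeholder keyword/response table; replace with your own information.
KEYWORDS = {
    'weather': 'Here is the weather information',
    'time': 'Here is the time information',
}

def handle_message(message):
    # Speak (and print) the response for the first keyword found in the text.
    for word, response in KEYWORDS.items():
        if word in message.lower():
            print response          # or update your PyQt4 display instead
            call(['espeak', response])
            return
    call(['espeak', 'No keyword recognized'])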

How can I load and execute a Python script in a C++ application?

I want to extend my application, which is written in C++, using Python scripts (extensions). I originally wanted to use Tcl for that, just like xchat does, for example, but later I decided to use Python, because it seems to be quite popular, for whatever reason.
However, I am failing to load and execute even a very simple Python script. I followed http://docs.python.org/2/extending/embedding.html
When I pass the filename of the script that I want to load as the pName argument, the error I get from PyErr_Print is: ImportError: Import by filename is not supported.
Reading the documentation, I figured I might need to run PyImport_ExecCodeModule. However, this C function requires two arguments: one is a char * (presumably the module name), and the other is compiled Python code, which according to the docs I can get by calling the Python function compile(). Unfortunately, the docs don't say how to call this Python function through the C API from my C++ code. Ideally, I would imagine doing something like
PyObject *code = PyCompile("print (\"hello :)\"");
but I couldn't find any function like PyCompile, nor any other C API function that would simply allow me to execute a Python built-in function (like compile) and grab its output as a PyObject.
So, the question is: how can I easily load a Python script from a file (something.py) and execute it within my application using the embedded Python interpreter?

Datalog require field `unlock'

While compiling an OCaml application I get the following error:
File "/tmp/ocamlpp466ee0", line 308, characters 34-233:
Error: Signature mismatch:
...
The field `unlock' is required but not provided
The field `lock' is required but not provided
Command exited with code 2.
My guess is that the error is related to the OCaml library Datalog (I've installed version 0.3 from here), because line 308 in the file /tmp/ocamlpp466ee0 is the first line of the following code:
module Logic = Datalog.Logic.Make(struct
type t = atom
let equal = eq_atom
let hash = hash_atom
let to_string a = Utils.sprintf "%a" pp_atom a
let of_string s = atom_of_json (Json.from_string s)
end)
I would really appreciate it if someone could help me understand what I am doing wrong.
Moreover, I would like to understand why the file /tmp/ocamlpp466ee0 is generated each time I run 'make'. I tried to figure it out by reading the Makefile, but I did not succeed.
I think something has changed in the Datalog library: in some version > 0.3, the functor Datalog.Logic.Make requires a module argument with lock and unlock values declared. So it's a version problem.
About the temporary file: as you can see, its name consists of the literal ocaml, pp (which stands for preprocessor), and some number. Preprocessors in OCaml usually work this way: they read an input source file and write an output source file. That's why temporary files are created.

Libtorrent - Given a magnet link, how do you generate a torrent file?

I have read through the manual and I cannot find the answer. Given a magnet link, I would like to generate a torrent file so that it can be loaded on the next startup to avoid redownloading the metadata. I have tried the fast resume feature, but I still have to fetch the metadata when I do it, and that can take quite a bit of time. The examples I have seen are for creating torrent files for a new torrent, whereas I would like to create one matching a magnet URI.
Solution found here:
http://code.google.com/p/libtorrent/issues/detail?id=165#c5
See creating torrent:
http://www.rasterbar.com/products/libtorrent/make_torrent.html
Modify the first lines:
file_storage fs;
// recursively adds files in directories
add_files(fs, "./my_torrent");
create_torrent t(fs);
To this:
torrent_info ti = handle.get_torrent_info();
create_torrent t(ti);
"handle" is from here:
torrent_handle add_magnet_uri(session& ses, std::string const& uri, add_torrent_params p);
Also, before creating the torrent you have to make sure that the metadata has been downloaded; do this by calling handle.has_metadata().
UPDATE
It seems the libtorrent Python API is missing some of the important C++ API required to create a torrent from a magnet link; the example above won't work in Python because the create_torrent Python class does not accept torrent_info as a parameter (C++ has it available).
So I tried it another way, but also hit a brick wall that makes it impossible. Here is the code:
if handle.has_metadata():
    torinfo = handle.get_torrent_info()
    fs = libtorrent.file_storage()
    for file in torinfo.files():
        fs.add_file(file)
    torfile = libtorrent.create_torrent(fs)
    torfile.set_comment(torinfo.comment())
    torfile.set_creator(torinfo.creator())
    for i in xrange(0, torinfo.num_pieces()):
        hash = torinfo.hash_for_piece(i)
        torfile.set_hash(i, hash)
    for url_seed in torinfo.url_seeds():
        torfile.add_url_seed(url_seed)
    for http_seed in torinfo.http_seeds():
        torfile.add_http_seed(http_seed)
    for node in torinfo.nodes():
        torfile.add_node(node)
    for tracker in torinfo.trackers():
        torfile.add_tracker(tracker)
    torfile.set_priv(torinfo.priv())
    f = open(magnet_torrent, "wb")
    f.write(libtorrent.bencode(torfile.generate()))
    f.close()
There is an error thrown on this line:
torfile.set_hash(i, hash)
It expects hash to be a const char*, but torrent_info.hash_for_piece(int) returns the class big_number, which has no API to convert it back to a const char*.
When I find some time I will report this missing API to the libtorrent developers, as it is currently impossible to create a .torrent file from a magnet URI when using the Python bindings.
torrent_info.orig_files() is also missing from the Python bindings; I'm not sure whether torrent_info.files() is sufficient.
UPDATE 2
I've created an issue on this, see it here:
http://code.google.com/p/libtorrent/issues/detail?id=294
Star it so they fix it fast.
UPDATE 3
It is fixed now; there is a 0.16.0 release. Binaries for Windows are also available.
Just wanted to provide a quick update using the modern libtorrent Python package: libtorrent now has the parse_magnet_uri function, which you can use to generate a torrent handle:
import libtorrent, os, time

def magnet_to_torrent(magnet_uri, dst):
    """
    Args:
        magnet_uri (str): magnet link to convert to torrent file
        dst (str): path to the destination folder where the torrent will be saved
    """
    # Parse magnet URI parameters
    params = libtorrent.parse_magnet_uri(magnet_uri)
    params.save_path = dst  # a save path is required before the torrent can be added

    # Download torrent info
    session = libtorrent.session()
    handle = session.add_torrent(params)
    print "Downloading metadata..."
    while not handle.has_metadata():
        time.sleep(0.1)

    # Create torrent and save to file
    torrent_info = handle.get_torrent_info()
    torrent_file = libtorrent.create_torrent(torrent_info)
    torrent_path = os.path.join(dst, torrent_info.name() + ".torrent")
    with open(torrent_path, "wb") as f:
        f.write(libtorrent.bencode(torrent_file.generate()))
    print "Torrent saved to %s" % torrent_path
If saving the resume data didn't work for you, you can generate a new torrent file using the information from the existing connection.
fs = libtorrent.file_storage()
libtorrent.add_files(fs, "somefiles")
t = libtorrent.create_torrent(fs)
t.add_tracker("http://10.0.0.1:312/announce")
t.set_creator("My Torrent")
t.set_comment("Some comments")
t.set_priv(True)
libtorrent.set_piece_hashes(t, "C:\\", lambda x: 0)
f=open("mytorrent.torrent", "wb")
f.write(libtorrent.bencode(t.generate()))
f.close()
I doubt that it'll make the resume faster than the function built specifically for this purpose.
Try looking at this code: http://code.google.com/p/libtorrent/issues/attachmentText?id=165&aid=-5595452662388837431&name=java_client.cpp&token=km_XkD5NBdXitTaBwtCir8bN-1U%3A1327784186190
It uses add_magnet_uri, which I think is what you need.
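For reference, here is a minimal Python sketch of that approach, assuming the older Python bindings that still expose add_magnet_uri; the magnet link and save path are placeholders:
import libtorrent as lt
import time

ses = lt.session()
params = {'save_path': './downloads'}      # placeholder save path
magnet = 'magnet:?xt=urn:btih:...'         # placeholder magnet link
handle = lt.add_magnet_uri(ses, magnet, params)

# Wait until the metadata has arrived before touching get_torrent_info()
while not handle.has_metadata():
    time.sleep(0.1)
print handle.get_torrent_info().name()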