I have a VC++ project which uses Lua 5.2 for scripting.
I am trying to implement MySQL compatibility into this project.
I do not own this project, so I would prefer to change as little source code as possible, if any at all.
I have downloaded and unzipped the files from this extension into the same base directory as the executable... and in my Main.lua file, I have added the line require('DBI'), as instructed on this wiki page.
But when I run the application and execute the script, I get:
LUA Fail:
C:\Path\To\bin\DBI.lua:3: attempt to call global 'module' (a nil value)
After some light reading, I found out that the module function was deprecated in Lua 5.2...
But this extension, as well as other MySQL extensions, requires the use of the module function.
So what is a workaround for this issue?
You may need to compile your Lua instance with LUA_COMPAT_MODULE; according to the source code: "LUA_COMPAT_MODULE controls compatibility with previous module functions 'module' (Lua) and 'luaL_register' (C)".
This won't be enough though as the module itself is written with Lua 5.1 API in mind. You can either try to find its Lua 5.2 version or use something like Peter Cawley's TwoFace that "allows a Lua 5.2 program to load most 5.1 C libraries without the need for any recompilation".
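If you do go the LUA_COMPAT_MODULE route, the flag just has to be defined when the Lua sources themselves are compiled. A rough sketch with the stock Makefile on a Unix-like system (for a VC++ build you would instead add LUA_COMPAT_MODULE to the preprocessor definitions of the project that builds the Lua library):

make linux MYCFLAGS=-DLUA_COMPAT_MODULE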
I used this as a quick and dirty way to get most of my scripting setup working with 5.2.
I didn't use module myself but have the likes of luasocket, copas etc in my stack.
I doubt this helps in your specific case but may be of some use more generally.
Essentially I replicated the C version of module in lua using the debug library to set the function environment. Not pretty but hey.
if not module then
  -- Reimplementation of Lua 5.1's module() on top of the 5.2 debug library.
  function module(modname, ...)
    -- Walk (and create as needed) the nested tables named by a dotted name.
    local function findtable(tbl, fname)
      for key in string.gmatch(fname, "([%w_]+)") do
        if key and key ~= "" then
          local val = rawget(tbl, key)
          if not val then
            local field = {}
            tbl[key] = field
            tbl = field
          elseif type(val) ~= "table" then
            return nil
          else
            tbl = val
          end
        end
      end
      return tbl
    end

    assert(type(modname) == "string")
    local value, modul = package.loaded[modname]
    if type(value) ~= "table" then
      modul = findtable(_G, modname)
      assert(modul, "name conflict for module '" .. modname .. "'")
      package.loaded[modname] = modul
    else
      modul = value
    end

    -- Fill in the standard module fields on first load.
    local name = modul._NAME
    if not name then
      modul._M = modul
      modul._NAME = modname
      modul._PACKAGE = string.match(modname, "([%w%._]*)%.[%w_]*$")
    end

    -- Point the caller's environment at the module table (assumes _ENV is
    -- upvalue 1, which holds for a module file's main chunk).
    local func = debug.getinfo(2, "f").func
    debug.setupvalue(func, 1, modul)

    -- Apply any option functions such as package.seeall.
    for _, f in ipairs{...} do f(modul) end
  end

  function package.seeall(modul)
    setmetatable(modul, { __index = _G })
  end
end
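With this shim in place, a Lua 5.1-style module should be loadable again under 5.2. A minimal sketch of the pattern it re-enables (mymod is a made-up name, not part of DBI):

-- mymod.lua: a 5.1-style module relying on the shim above
module("mymod", package.seeall)   -- redirects this chunk's environment to the module table

function hello()                  -- ends up as mymod.hello
    print("hello from mymod")
end

-- elsewhere:
-- local mymod = require("mymod")
-- mymod.hello()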
I am trying to write Python 2.7 code that will
1. Dynamically load a list / array of modules from a config file at startup
2. Call functions from those modules. Those functions are also designated in a config file (maybe the same config file, maybe a different one).
The idea is my code will have no idea until startup which modules to load. And the portion that calls the functions will have no idea which functions to call and which modules those functions belong to until runtime.
I'm not succeeding. A simple example of my situation is this:
The following is abc.py, a module that should be dynamically loaded (in my actual application I would have several such modules designated in a list / array in a config file):
def abc_fcn():
    print("Hello World!")

def another_fcn():
    print("BlahBlah")
The following is the .py code which should load abc.py (my actual code would need to import the entire list / array of modules from the config file). Both this .py file and abc.py are in the same folder / directory. Please note comments next to each statement.
module_to_import = "abc" #<- Will normally come from config file
fcn_to_call = "abc.abc_fcn" #<- Will normally come from config file
__import__(module_to_import) #<- No error
print(help(module_to_import)) #<- Works as expected
eval(fcn_to_call)() #<- NameError: name 'abc' is not defined
When I change the second line to the following...
fcn_to_call = "abc_fcn"
...the NameError changes to "name 'abc_fcn' is not defined".
What am I doing wrong? Thanks in advance for the help!
__import__ only returns the module specified; it does not add it to the global namespace. So to accomplish what you want, save the result in a variable and then dynamically retrieve the function you want. That could look like:
fcn_to_call = 'abc_fcn'
mod = __import__(module_to_import)
func = getattr(mod, fcn_to_call)
func()
On a side note, abc is the name of the Abstract Base Classes module in the Python standard library, although I know you were probably just using it as an example.
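If the names in your config file are dotted, like the original abc.abc_fcn, a small sketch of the same idea (importlib.import_module is a convenience wrapper around __import__ available in 2.7):

import importlib

dotted_name = "abc.abc_fcn"                   # would normally come from the config file
module_name, func_name = dotted_name.rsplit(".", 1)
mod = importlib.import_module(module_name)    # import the module dynamically
func = getattr(mod, func_name)                # look the function up by name
func()                                        # prints "Hello World!"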
You should assign the return value of __import__ to a variable named abc so that you can actually use it as a module.
abc = __import__(module_to_import)
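With the module bound to a name like that, the original eval-based call can work too; a sketch that keeps the binding config-driven via globals() (eval of config strings is risky in general):

globals()[module_to_import] = __import__(module_to_import)
eval(fcn_to_call)()   # fcn_to_call == "abc.abc_fcn", prints "Hello World!"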
How can I create a Java or Javascript JSON webservice to retrieve data from a simple properties file? My intention is to uses this as a global property storage for a Jenkins instance that runs many Unit tests. The master property file also needs to be capable of being manually edited and stored in source control.
I am just wondering what method people would recommend as the easiest for a junior-level programmer like me. I need read capability at a minimum and, if it's not too hard, write capability as well; it therefore does not have to be REST.
If something like this already exists in Java or Groovy, a link to that resource would be appreciated. I am a SoapUI expert but I am unsure if a mock service could do this sort of thing.
I found something like this in Ruby but I could not get it to work as I am not a Ruby programmer at all.
There are a multitude of Java REST frameworks, but I'm most familiar with Jersey so here's a Groovy script that gives a simple read capability to a properties file.
@Grapes([
    @Grab(group='org.glassfish.jersey.containers', module='jersey-container-grizzly2-http', version='2.0'),
    @Grab(group='org.glassfish.jersey.core', module='jersey-server', version='2.0'),
    @Grab(group='org.glassfish.jersey.media', module='jersey-media-json-jackson', version='2.0')
])
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory
import org.glassfish.jersey.jackson.JacksonFeature

import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.Produces

@Path("properties")
class PropertiesResource {
    @GET
    @Produces("application/json")
    Properties get() {
        new File("test.properties").withReader { Reader reader ->
            Properties p = new Properties()
            p.load(reader)
            return p
        }
    }
}

def rc = new org.glassfish.jersey.server.ResourceConfig(PropertiesResource, JacksonFeature);
GrizzlyHttpServerFactory.createHttpServer('http://localhost:8080/'.toURI(), rc).start()

System.console().readLine("Press any key to exit...")
Unfortunately, since Jersey uses the 3.1 version of the asm library, there are conflicts with Groovy's 4.0 version of asm unless you run the script using the groovy-all embeddable jar (it won't work by just calling groovy on the command-line and passing the script). I also had to supply an Apache Ivy dependency. (Hopefully the Groovy team will resolve these in the next release--the asm one in particular has caused me grief in the past.) So you can call it like this (supply the full paths to the classpath jars):
java -cp ivy-2.2.0.jar:groovy-all-2.1.6.jar groovy.lang.GroovyShell restProperties.groovy
All you have to do is create a properties file named test.properties, then copy the above script into a file named restProperties.groovy, then run via the above command line. Then you can run the following in Unix to try it out.
curl http://localhost:8080/properties
And it will return a JSON map of your properties file.
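For example, if test.properties held made-up entries such as build.number=42 and env.name=staging, the response should be a JSON object along the lines of:

{"env.name":"staging","build.number":"42"}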
TypoScript offers the possibility to read the environment variable HTTP_COOKIE_VARS (which is deprecated):
10 = TEXT
10.data = global : HTTP_COOKIE_VARS | some_cookie
I got this from the documentation.
But on my server (PHP 5.3) this variable is empty! I suppose this is because the variable is deprecated. Now I am running out of options that do not involve an extension, a user function, or a user condition.
Maybe you have an idea! Thanks in advance.
This should do the job (at least with TYPO3 4.5 and PHP 5.3.8):
10 = TEXT
10.data = global:_COOKIE|some_cookie
10.wrap = <h2>Cookie: |</h2>
Unfortunately, there is no built-in functionality for the $_COOKIE variable.
You can however write a hook which implements the tslib_content_getDataHook interface and register it via
$TYPO3_CONF_VARS['SC_OPTIONS']['tslib/class.tslib_content.php']['getData'][] = 'path/to/your/class.user_cookiehook.php';
[request.getCookieParams()['some_cookie'] == 'some_value']
Your Code
[end]
I'm working on embedding Python in our test suite application. The purpose is to use Python to run several test scripts to collect data and make a report of the tests. Multiple test scripts for one test run can create global variables and functions that can be used in the next script.
The application also provides extension modules that are imported in the embedded interpreter, and are used to exchange some data with the application.
But the user can also make multiple test runs. I don't want to share those globals, imports and the exchanged data between multiple test runs. I have to be sure I restart from a known, clean state so I can control the test environment and get the same results.
How should I reinitialise the interpreter?
I used Py_Initialize() and Py_Finalize(), but I get an exception on the second run, when the extension modules I provide to the interpreter are initialised a second time.
And the documentation warns against using it more than once.
Using sub-interpreters seems to have the same caveats with extension modules initialization.
I suspect that I'm doing something wrong with the initialisation of my extension modules, but I fear that the same problem happens with 3rd party extension modules.
Maybe it's possible to get it to work by launching the interpreter in its own process, so as to be sure that all the memory is released.
By the way, I'm using Boost.Python for this, which also warns AGAINST using Py_Finalize!
Any suggestion?
Thanks
Here is another way I found to achieve what I want: starting with a clean slate in the interpreter.
I can control the global and local namespaces I use to execute the code:
// get the dictionary from the main module
// Get pointer to main module of python script
object main_module = import("__main__");
// Get dictionary of main module (contains all variables and stuff)
object main_namespace = main_module.attr("__dict__");
// define the dictionaries to use in the interpreter
dict global_namespace;
dict local_namespace;
// add the builtins
global_namespace["__builtins__"] = main_namespace["__builtins__"];
I can then use the namespaces for execution of the code contained in pyCode:
exec( pyCode, global_namespace, local_namespace );
When I want to run a new instance of my test, I can reset the namespaces by clearing the dictionaries:
// empty the interpreters namespaces
global_namespace.clear();
local_namespace.clear();
// Copy builtins to new global namespace
global_namespace["__builtins__"] = main_namespace["__builtins__"];
Depending on what level of isolation I want for the execution, I can pass the same dictionary as both the global and the local namespace.
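A minimal sketch of that variant, reusing the dictionaries declared above:

// execute with a single shared namespace (globals == locals), as at module level
exec( pyCode, global_namespace, global_namespace );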
How about using code.InteractiveInterpreter?
Something like this should do it:
#include <boost/python.hpp>
#include <string>
#include <stdexcept>

using namespace boost::python;

std::string GetPythonError()
{
    PyObject *ptype = NULL, *pvalue = NULL, *ptraceback = NULL;
    PyErr_Fetch(&ptype, &pvalue, &ptraceback);
    std::string message("");
    if(pvalue && PyString_Check(pvalue)) {
        message = PyString_AsString(pvalue);
    }
    return message;
}

// Must be called after Py_Initialize()
void RunInterpreter(std::string codeToRun)
{
    object pymodule = object(handle<>(borrowed(PyImport_AddModule("__main__"))));
    object pynamespace = pymodule.attr("__dict__");
    try {
        // Initialize the embedded interpreter
        object result = exec( "import code\n"
                              "__myInterpreter = code.InteractiveConsole() \n",
                              pynamespace);
        // Run the code
        str pyCode(codeToRun.c_str());
        pynamespace["__myCommand"] = pyCode;
        result = eval("__myInterpreter.push(__myCommand)", pynamespace);
    } catch(error_already_set) {
        throw std::runtime_error(GetPythonError().c_str());
    }
}
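A hedged usage sketch, assuming the Python 2 era API implied by the PyString_* calls above:

int main()
{
    Py_Initialize();                        // start the interpreter once
    RunInterpreter("print 'first run'");    // each call pushes a line into a fresh InteractiveConsole
    RunInterpreter("print 'second run'");
    Py_Finalize();
    return 0;
}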
I'd write another shell script executing the sequence of test scripts with new instances of python each time. Or write it in Python, like:

import subprocess

# run your own tests in this process first
# now run the user scripts, each in a new process to get a virgin environment
for script in userScripts:
    subprocess.call(['python', script])
I'm looking for a simple example code for C++\IronPython integration, i.e. embedding python code inside a C++, or better yet, Visual C++ program.
The example code should include: how to share objects between the languages, how to call functions\methods back and forth etc...
Also, an explicit setup procedure would help too. (How to include the Python runtime dll in Visual Studio etc...)
I've found a nice example for C#\IronPython here, but couldn't find C++\IronPython example code.
UPDATE - I've written a more generic example (plus a link to a zip file containing the entire VS2008 project) as entry on my blog here.
Sorry, I am so late to the game, but here is how I have integrated IronPython into a C++/CLI app in Visual Studio 2008 (.NET 3.5), actually a mixed-mode app with C/C++.
I write add-ons for a map-making application written in assembly. The API is exposed so that C/C++ add-ons can be written, and I mix C/C++ with C++/CLI. Some of the elements in this example come from that API (such as XPCALL and CmdEnd()); please just ignore them.
///////////////////////////////////////////////////////////////////////
void XPCALL PythonCmd2(int Result, int Result1, int Result2)
{
    if(Result==X_OK)
    {
        try
        {
            String^ filename = gcnew String(txtFileName);
            String^ path = Assembly::GetExecutingAssembly()->Location;
            ScriptEngine^ engine = Python::CreateEngine();
            ScriptScope^ scope = engine->CreateScope();
            ScriptSource^ source = engine->CreateScriptSourceFromFile(String::Concat(Path::GetDirectoryName(path), "\\scripts\\", filename + ".py"));
            scope->SetVariable("DrawingList", DynamicHelpers::GetPythonTypeFromType(AddIn::DrawingList::typeid));
            scope->SetVariable("DrawingElement", DynamicHelpers::GetPythonTypeFromType(AddIn::DrawingElement::typeid));
            scope->SetVariable("DrawingPath", DynamicHelpers::GetPythonTypeFromType(AddIn::DrawingPath::typeid));
            scope->SetVariable("Node", DynamicHelpers::GetPythonTypeFromType(AddIn::Node::typeid));
            source->Execute(scope);
        }
        catch(Exception ^e)
        {
            Console::WriteLine(e->ToString());
            CmdEnd();
        }
    }
    else
    {
        CmdEnd();
    }
}
///////////////////////////////////////////////////////////////////////////////
As you can see, I expose some objects to IronPython (DrawingList, DrawingElement, DrawingPath and Node). These are C++/CLI objects that I created to expose "things" to IronPython.
When the C++/CLI line source->Execute(scope) is called, the only Python line that runs right away is the DrawingList.RequestData call at the end of the script. RequestData takes a delegate and a data type. When the C++/CLI code is done, it calls the delegate, which points to the function diamond.
In the function diamond, the requested data is retrieved with the call to DrawingList.RequestedValue(). The call to DrawingList.AddElement(dp) adds the new element to the application's visual database. And lastly, the call to DrawingList.EndCommand() tells the FastCAD engine to clean up and end the running of the plugin.
import clr

def diamond(Result1, Result2, Result3):
    if(Result1 == 0):
        dp = DrawingPath()
        dp.drawingStuff.EntityColor = 2
        dp.drawingStuff.SecondEntityColor = 2
        n = DrawingList.RequestedValue()
        dp.Nodes.Add(Node(n.X-50, n.Y+25))
        dp.Nodes.Add(Node(n.X-25, n.Y+50))
        dp.Nodes.Add(Node(n.X+25, n.Y+50))
        dp.Nodes.Add(Node(n.X+50, n.Y+25))
        dp.Nodes.Add(Node(n.X, n.Y-40))
        DrawingList.AddElement(dp)
        DrawingList.EndCommand()

DrawingList.RequestData(diamond, DrawingList.RequestType.PointType)
I hope this is what you were looking for.
If you don't need .NET functionality, you could rely on embedding Python instead of IronPython. See Python's documentation on Embedding Python in Another Application for more info and an example. If you don't mind being dependent on BOOST, you could try out its Python integration library.
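For instance, a bare-bones CPython embedding sketch in the spirit of that documentation page (no IronPython or .NET involved) could look like this:

#include <Python.h>

int main()
{
    Py_Initialize();                                            // start the embedded interpreter
    PyRun_SimpleString("print('Hello from embedded Python')");  // run a line of Python
    Py_Finalize();                                              // shut it down again
    return 0;
}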