I've been experimenting with the D language, and I find std.variant cool. But I also find it confusing.
For example,
import std.stdio;

void main()
{
    string x = "";
    x ~= 'v';
    writeln(x);
}
This does what is expected.
But this,
import std.stdio;
import std.variant;

void main()
{
    Variant x = "";
    x ~= 'v';
    writeln(x);
}
gives a hard-to-decipher RUNTIME error.
Why is that?
Here is the message.
std.variant.VariantException#std/variant.d(1675): Variant: attempting to use incompatible types char and immutable(char)[]
----------------
??:? inout #property inout(immutable(char)[]) std.variant.VariantN!(32uL).VariantN.get!(immutable(char)[]).get() [0x106f84654]
??:? long std.variant.VariantN!(32uL).VariantN.handler!(immutable(char)[]).handler(std.variant.VariantN!(32uL).VariantN.OpID, ubyte[32]*, void*) [0x106f7559d]
??:? std.variant.VariantN!(32uL).VariantN std.variant.VariantN!(32uL).VariantN.opOpAssign!("~", char).opOpAssign(char) [0x106f84a1d]
??:? _Dmain [0x106f74d8d]
This seems like a bug in std.variant.Variant. The problem is that char and immutable(char) are two distinct types, and Variant doesn't know that they're closely related. There are two workarounds: either make x hold a char[] instead of an immutable(char)[] (also known as string), or make 'v' an immutable(char):
import std.stdio;
import std.variant;

void main()
{
    Variant x = "".dup; // Duplicate the string to make it a heap-allocated, mutable string.
    x ~= 'v';
    writeln(x);
}
or
import std.stdio;
import std.variant;

void main()
{
    Variant x = "";
    x ~= cast(immutable)'v'; // Make 'v' explicitly immutable.
    writeln(x);
}
If I define a dup function in my D program, I can no longer use the dup property of an array. This code works:
import std.stdio;

void main() {
    double[] v = [0.1, 0.2, 0.3];
    writeln(v.dup);
}
But this code fails with "clash.d(9): Error: function clash.dup (double x) is not callable using argument types (double[])":
import std.stdio;

double dup(double x) {
    return x;
}

void main() {
    double[] v = [0.1, 0.2, 0.3];
    writeln(v.dup);
}
How can I define a dup function in my program and not lose the dup property of arrays?
Alias it into the local overload set:
alias dup = object.dup;
right below your own dup function definition.
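Putting it together, the failing example above compiles once the alias is added; a minimal sketch:

import std.stdio;

double dup(double x) {
    return x;
}

// Pull druntime's array dup back into this module's overload set,
// so the array property and our own function are both visible.
alias dup = object.dup;

void main() {
    double[] v = [0.1, 0.2, 0.3];
    writeln(v.dup);    // array .dup works again
    writeln(dup(0.5)); // our own dup is still callable
}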
Basically, here's what I need:
I've got an extern(al) char* variable
I want to assign the value of a D string to it
Code:
import std.stdio;
import std.string;
import core.stdc.stdlib;

extern (C) int yyparse();
extern (C) extern __gshared FILE* yyin;
extern (C) extern __gshared char* yyfilename;

void main(string[] args)
{
    string filename = args[1];
    auto file = File(filename, "r");
    yyfilename = toStringz(filename);
    yyin = file.getFP();
    yyparse();
}
However, the toStringz conversion produces this error:
main.d(15): Error: cannot implicitly convert expression (toStringz(filename)) of type immutable(char)* to char*
Any idea what's going wrong?
The problem is that yyfilename and the return value of toStringz (when it is passed a string) have different const qualifiers. filename is immutable (the D string type is an alias for immutable(char)[]); however, yyfilename has no const qualifier and is thus mutable.
You have two options:
If you know that yyfilename will not be modified elsewhere in your program, you should declare it as const(char)* instead of char*, as sketched below.
Otherwise, you should create a copy of filename when converting it: toUTFz!(char*)(filename).
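For the first option, a minimal sketch (setName is just an illustrative helper; this assumes the C side never writes through the pointer):

import std.string;

// const(char)* accepts the immutable(char)* returned by toStringz.
extern (C) extern __gshared const(char)* yyfilename;

void setName(string filename)
{
    yyfilename = toStringz(filename);
}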
For the second option, you can use:
import std.utf;

void main()
{
    string filename;
    char* yyfilename;

    yyfilename = toUTFz!(char*)(filename);
    // yyfilename = filename.toUTFz!(char*); // or with UFCS syntax
}
I extended the example provided in this answer by adding a private member variable, and printing it in the bar() function:
#include <iostream>

class Foo {
private:
    double m;
public:
    Foo() { m = 2.344; }

    void bar() {
        std::cout << "Hello, number is " << m << std::endl;
    }
};

extern "C" {
    Foo* Foo_new() { return new Foo(); }
    void Foo_bar(Foo* foo) { foo->bar(); }
}
The ctypes wrapper was unchanged and is:
from ctypes import *

lib = cdll.LoadLibrary('./libfoo.so')

class Foo(object):
    def __init__(self):
        self.obj = lib.Foo_new()

    def bar(self):
        lib.Foo_bar(self.obj)

f = Foo()
f.bar()
When I run the Python code (after having already compiled the C++ code), I get a segmentation fault that I have narrowed down to the printing of m in bar().
The segfault does not happen:
in the original code;
if I remove the printing of m but keep it as a member variable;
if I substitute m with any fixed number in bar().
I am really puzzled why this should be happening. As this is an experiment to learn ctypes, any help would be appreciated.
If you're using 64-bit Python, you need to define the restype and argtypes. Otherwise ctypes defaults to casting the values to a 32-bit C int, which truncates the 64-bit pointer returned by Foo_new.
from ctypes import *
lib = CDLL('./libfoo.so')
lib.Foo_new.argtypes = []
lib.Foo_new.restype = c_void_p
lib.Foo_bar.argtypes = [c_void_p]
lib.Foo_bar.restype = None
Here are source links for 2.7.5, Modules/_ctypes/callproc.c:
ConvParam (Lines 645-663)
GetResult (Lines 914-915)
On 64-bit Windows a C long is 32 bits, but it's 64 bits on most other 64-bit platforms. By forcing int, the result is at least consistent across platforms.
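Putting it together, the question's wrapper with the type declarations added; a sketch (the expected output assumes the library from the question):

from ctypes import *

lib = CDLL('./libfoo.so')

# Declare the C signatures up front so the 64-bit pointer returned by
# Foo_new is not truncated to a 32-bit int.
lib.Foo_new.argtypes = []
lib.Foo_new.restype = c_void_p
lib.Foo_bar.argtypes = [c_void_p]
lib.Foo_bar.restype = None

class Foo(object):
    def __init__(self):
        self.obj = lib.Foo_new()

    def bar(self):
        lib.Foo_bar(self.obj)

f = Foo()
f.bar()  # prints "Hello, number is 2.344"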
I'm writing some code that returns a column value from a row based on the column index. The row is updated from time to time in C++, and I'd like the python code to keep a reference to the column value. The following code reflects my current solution, in which I have to repeatedly fetch the column value.
struct Foo
{
    PyObject * get(int pos)
    {
        // Position 0 is an integer value. Position 1 is a float value.
        if (pos == 0)
            return Py_BuildValue("i", m_int);
        else
            return Py_BuildValue("f", m_float);
    }

    void set_int(int i)
    {
        m_int = i;
    }

    void set_float(float f)
    {
        m_float = f;
    }

    int m_int;
    float m_float;
};
My bindings are simple:
class_<Foo>("Foo")
    .def("get", &Foo::get)
    .def("set_int", &Foo::set_int)
    .def("set_float", &Foo::set_float)
    ;
This works at the Python level like this:
In [16]: foo = Foo()
In [17]: foo.set_int(1)
In [18]: foo.set_float(2.5)
In [19]: i = foo.get(0)
In [20]: f = foo.get(1)
In [21]: i
Out[21]: 1
In [22]: type(i)
Out[22]: int
In [23]: f
Out[23]: 2.5
In [24]: type(f)
Out[24]: float
So far, so good. However, when I modify foo, I'd like i and f to reflect the new values. Currently, they reflect the old values.
In [25]: foo.set_int(42)
In [26]: i
Out[26]: 1
How do I set up the C++ and binding code so that 'get' returns a reference to, rather than a copy of, the Foo member variables?
You need to wrap the integer values in an object class. Primitives are immutable in Python, so when you return an int value it will never reference the original int, just a constant copy. You could return a new struct of the form:
struct IntWrap {
    int *value;
    int get() { return *value; }
    void set_int(int nval) { *value = nval; }
    ...
};

class_<IntWrap>("IntWrap")
    .def("get", &IntWrap::get)
    .def("set_int", &IntWrap::set_int)
    ...
and make wrapper function bindings that treat the struct as a mutable integer in Python. This, however, violates the constness idiom of primitives in Python.
Here's a post on effectively inheriting from str or int, where you could adapt the integer-style object to use IntWrap instead (in case you want something that acts more like an int out of the box).
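For illustration, assuming Foo::get were changed to return an IntWrap whose value pointer targets m_int (hypothetical, not shown above), the session from the question would then behave like this:

# Hypothetical usage: foo.get(0) now returns an IntWrap bound to m_int.
foo = Foo()
foo.set_int(1)
i = foo.get(0)  # an IntWrap, not a plain int
print(i.get())  # 1
foo.set_int(42)
print(i.get())  # 42 -- the wrapper sees the updated value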
I have a closed-source C++ library that provides header files with code equivalent to:
class CSomething
{
public:
    void getParams( unsigned char & u8OutParamOne,
                    unsigned char & u8OutParamTwo ) const;
private:
    unsigned char u8OutParamOne_;
    unsigned char u8OutParamTwo_;
};
I'm trying to expose that to Python, my wrapper code is something like this:
BOOST_PYTHON_MODULE(MySomething)
{
    class_<CSomething>("CSomething", init<>())
        .def("getParams", &CSomething::getParams, (args("one", "two")));
}
Now I'm trying to use that in Python, which fails horribly:
one, two = 0, 0
CSomething.getParams(one, two)
Which results in:
ArgumentError: Python argument types in
CSomething.getParams(CSomething, int, int)
did not match C++ signature:
getParams(CSomething {lvalue}, unsigned char {lvalue} one, unsigned char {lvalue} two)
What do I need to change either in the Boost.Python wrapper code or Python code to make it work? How do I add some Boost.Python magic to automatically cast PyInt to unsigned char and vice-versa?
Boost.Python is complaining that it needs lvalue parameters, a concept that does not exist in Python:
def f(x):
    x = 1

y = 2
f(y)
print(y)  # Prints 2
The x parameter of the f function is not a C++-like reference. In C++, the output is different:
#include <iostream>

void f(int &x) {
    x = 1;
}

int main() {
    int y = 2;
    f(y);
    std::cout << y << std::endl; // Prints 1.
}
You have a few choices here:
a) Wrap the CSomething.getParams function to return a tuple of the new parameter values (see the C++ sketch after this list):
one, two = 0, 0
one, two = CSomething.getParams(one, two)
print(one, two)
b) Wrap the CSomething.getParams function to accept a class instance as parameter:
class GPParameter:
    def __init__(self, one, two):
        self.one = one
        self.two = two

p = GPParameter(0, 0)
CSomething.getParams(p)
print(p.one, p.two)
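For choice (a), a minimal C++ sketch of such a wrapper (getParamsWrapper is my name for it; Boost.Python's built-in converters handle unsigned char and Python int):

#include <boost/python.hpp>

using namespace boost::python;

// Thin wrapper: the incoming values are only initial values; the member
// function overwrites them, and we hand the results back as a tuple.
tuple getParamsWrapper(const CSomething &self,
                       unsigned char one, unsigned char two)
{
    self.getParams(one, two);
    return make_tuple(one, two);
}

BOOST_PYTHON_MODULE(MySomething)
{
    class_<CSomething>("CSomething", init<>())
        .def("getParams", &getParamsWrapper);
}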