'PointerType' object has no attribute 'type' - llvm

I'm trying to convert this raw LLVM code:
%UnwEx = type { i64, i8*, i64, i64 }
@UnwEx.size = constant i64 ptrtoint (%UnwEx* getelementptr (%UnwEx* null, i32 1) to i64)
to llvmlite:
unw_ex = context.get_identified_type("UnwEx")
unw_ex.elements = [int_, i8_ptr, int_, int_]
i32 = ir.IntType(32)
gep = builder.gep(ir.PointerType(unw_ex), [i32(1)], name="gep")
... # some extra code to cast gep to integer
The idea is to get the size of the struct type UnwEx at runtime. However, the line with the gep call raises this:
Traceback (most recent call last):
File "C:/Users/DavidRagazzi/Desktop/mamba/mamba/core/runtime.py", line 495, in <module>
generate(None, 64, 8)
File "C:/Users/DavidRagazzi/Desktop/mamba/mamba/core/runtime.py", line 280, in generate
gep = builder.gep(unw_ex_ptr, [i32(1)], name="gep")
File "C:\Program Files\Python37\lib\site-packages\llvmlite\ir\builder.py", line 925, in gep
inbounds=inbounds, name=name)
File "C:\Program Files\Python37\lib\site-packages\llvmlite\ir\instructions.py", line 494, in __init__
typ = ptr.type
AttributeError: 'PointerType' object has no attribute 'type'
What's wrong with this? Is there a better way to get the size of a type?

The first argument to gep should be a value that is a pointer to an instance of UnwEx. What you're passing is a type, not a value. You get such a pointer by allocating memory with alloca or malloc (in which case you'd have to bitcast the returned pointer to UnwEx*).
Another way of getting UnwEx's size is to use its get_abi_size method. Something like
target = llvmlite.binding.Target.from_default_triple()
target_machine = target.create_target_machine()
unw_ex_size = unw_ex.get_abi_size(target_machine.target_data)
gives you the size in bytes that an instance of UnwEx would occupy on your host platform.
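If you want to keep the ptrtoint/GEP-on-null trick from your original IR instead, the key point is that gep needs a pointer value, not a type. Here is a minimal sketch of that approach, assuming llvmlite's ir API (the function name sizeof_unw_ex and other names are just illustrative):
import llvmlite.ir as ir

context = ir.Context()
i32 = ir.IntType(32)
i64 = ir.IntType(64)
i8_ptr = ir.IntType(8).as_pointer()

unw_ex = context.get_identified_type("UnwEx")
unw_ex.set_body(i64, i8_ptr, i64, i64)

module = ir.Module(name="m", context=context)
func = ir.Function(module, ir.FunctionType(i64, []), name="sizeof_unw_ex")
builder = ir.IRBuilder(func.append_basic_block("entry"))

null_ptr = ir.Constant(unw_ex.as_pointer(), None)  # a null UnwEx* value
gep = builder.gep(null_ptr, [i32(1)], name="gep")   # &((UnwEx*) null)[1]
size = builder.ptrtoint(gep, i64, name="size")      # pointer -> byte count
builder.ret(size)
The get_abi_size route is simpler if you only need the number at code-generation time; the GEP trick emits the size computation into the IR itself.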

Related

Problems with autograd.hessian_vector_product and scipy.optimize.NonlinearConstraint

I'm trying to run a minimization problem using scipy.optimize, including a NonlinearConstraint. I really don't want to code derivatives myself, so I'm using autograd to do it. But even though I follow the exact same procedure for the arguments to minimize and to NonlinearConstraint, the first seems to work and the second doesn't.
Here's my MWE:
useconstraint = False
import autograd
import autograd.numpy as np
from scipy import optimize
def function(x): return x[0]**2 + x[1]**2
functionjacobian = autograd.jacobian(function)
functionhvp = autograd.hessian_vector_product(function)
def constraint(x): return np.array([x[0]**2 - x[1]**2])
constraintjacobian = autograd.jacobian(constraint)
constrainthvp = autograd.hessian_vector_product(constraint)
constraint = optimize.NonlinearConstraint(constraint, 1, np.inf, constraintjacobian, constrainthvp)
startpoint = [1, 2]
bounds = optimize.Bounds([-np.inf, -np.inf], [np.inf, np.inf])
print optimize.minimize(
    function,
    startpoint,
    method='trust-constr',
    jac=functionjacobian,
    hessp=functionhvp,
    constraints=[constraint] if useconstraint else [],
    bounds=bounds,
    )
When I turn useconstraint off (at the top), it works fine and minimizes at (0, 0) as expected. When I turn it on, I get the following error:
Traceback (most recent call last):
File "test.py", line 29, in <module>
bounds=bounds,
File "/home/heshy/.local/lib/python2.7/site-packages/scipy/optimize/_minimize.py", line 613, in minimize
callback=callback, **options)
File "/home/heshy/.local/lib/python2.7/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py", line 336, in _minimize_trustregion_constr
for c in constraints]
File "/home/heshy/.local/lib/python2.7/site-packages/scipy/optimize/_constraints.py", line 213, in __init__
finite_diff_bounds, sparse_jacobian)
File "/home/heshy/.local/lib/python2.7/site-packages/scipy/optimize/_differentiable_functions.py", line 343, in __init__
self.H = hess(self.x, self.v)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/wrap_util.py", line 20, in nary_f
return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/differential_operators.py", line 24, in grad
vjp, ans = _make_vjp(fun, x)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/core.py", line 10, in make_vjp
end_value, end_node = trace(start_node, fun, x)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/tracer.py", line 10, in trace
end_box = fun(start_box)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/wrap_util.py", line 15, in unary_f
return fun(*subargs, **kwargs)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/differential_operators.py", line 88, in vector_dot_grad
return np.tensordot(fun_grad(*args, **kwargs), vector, np.ndim(vector))
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/tracer.py", line 44, in f_wrapped
ans = f_wrapped(*argvals, **kwargs)
File "/home/heshy/.local/lib/python2.7/site-packages/autograd/tracer.py", line 48, in f_wrapped
return f_raw(*args, **kwargs)
File "/home/heshy/.local/lib/python2.7/site-packages/numpy/core/numeric.py", line 1371, in tensordot
raise ValueError("shape-mismatch for sum")
ValueError: shape-mismatch for sum
What am I doing wrong? I think the issue is in the hessian_vector_product because I see hess in the error message, but I'm not sure about that.
Ok, I found the answer. This was very confusing.
The hessp argument to minimize expects a function that returns the "Hessian of objective function times an arbitrary vector p" (source). By contrast, the hess argument to NonlinearConstraint expects "A callable [that] must return the Hessian matrix of dot(fun, v)" (source).
If you interpret the first quote like I did, "Hessian of (objective function times an arbitrary vector p)", it means pretty much the same thing as "Hessian matrix of dot(fun, v)". I therefore assumed you could use the same autograd function for both.
However, the correct interpretation is "(Hessian of objective function) times an arbitrary vector p", which is completely different. The hessian_vector_product function in autograd gives the correct result for the first, but you need a different function for the second.
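In other words, NonlinearConstraint's hess wants a callable hess(x, v) returning the full Hessian matrix of dot(constraint(x), v). A minimal sketch of one way to build that with autograd (not the OP's exact fix; the helper name constraint_hess is illustrative):
import autograd
import autograd.numpy as np
from scipy import optimize

def constraint(x):
    return np.array([x[0]**2 - x[1]**2])

# Hessian of dot(constraint(x), v): form the scalar dot product first,
# then differentiate it twice, giving an (n, n) matrix as required.
def constraint_hess(x, v):
    return autograd.hessian(lambda y: np.dot(constraint(y), v))(x)

nlc = optimize.NonlinearConstraint(constraint, 1, np.inf,
                                   autograd.jacobian(constraint),
                                   constraint_hess)
For minimize's hessp argument, on the other hand, autograd.hessian_vector_product is the right tool, as in the original code.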

ValueError: setting an array element with a sequence with numpy operation

I have a list which looks like this,
data_raw=[[], [7944, -11896, 3376, 1627, -850, -3991], [8688, -12192, 1936,1404, -616, -3536], [6540, -11800, 1608, 3021, 780, -1061], [6804, -11864, 3828, 4310, 552, -2343], [7208, -12544, 3768, 2542, 286, 1264], [7048, -14532, 6824, 2528, 1577, 2583], [6112, -17376, 10180, 132, -1716, 1001], [7576, -21140, 6796, -1725, 1657, 1980], [2928, -31716, 15400, -5945, 824, -3558], [8940, -24016, 11540, -12047,-5574, -16019], [12020, -17516, 3744, -14637, 1521, -14791], [8916, -16160, 5860, -14122, -3793, -13597], [10144, -8124, 1076, -12027, -1194, -8809], [8088, -7264, 928, -18441, -2058, -80], [7684, -4896, -5224, -9800, 2427, 2054], [2040, -7776, -3520, -9306, 4442, 1276], [6240, -7340, -7216, -1757, -3630, -2734], [5720, -3940, -4632, -901, 1469, -1682], [5244, -4676, -5648, 2720, 3526, -436], [4016, -5336, -2976, 4280, 4543, -1562], [4028, -5156, -5560, 7391, 5000, -1317], [748, -9800, -2144, 10353, 6616, -3390], [10268, -7220, 1844, 11657, 8566, -4740], [11300, -10752, 4508, 11666, 10771, -1356], [16792, -10180, 24476, 13474, 2828, -5205], [19208, -10908, 6636, 9747, 10501, 1676], [7540, -20480, 13248, 8715, 12607, 7017], [15780, -20832, 11600, 5686, 4737, -3654], [18004, -20072, 17716, 1082, 2218, -3181], [16516, -18528, 14568, -3931, -5457, -4260], [15596, -12596,9084, -7735, -8646, -4221], [13296, -8948, 6316, -9215, -8260, -3225], [10860, -8124, 6116, -7264, -7653, -678], [7968, -7828, 5384, -8604, -7043, 1076], [8008, -5316, 1816, -6457, -7414, -50], [9304, -3568, 1092, -4895, -4654, 3123], [9560, -3932, -352, -904, -6369, 1981], [14692, -3168, 836, 2406, -8099, 3121], [13088, -6292, 44, 5503, -11759, 6405], [11892, -8316, -836, 6159, -8673, 10130], [8252, -13220, -1064, 8279, -7906, 12090], [3572, -18392, -1536, 5995, -2719, 10667], [2864, -19576, 960, 6207, -4501, 6554], [1024, -20140, -1964, 7834, -10817, 5197]]
When I use this code:
data = np.array(data_raw).astype(float)
I get this error:
Traceback (most recent call last):
data = np.array(data_raw).astype(float)
ValueError: setting an array element with a sequence.
Does anyone know why this error occurred?
The first element of your 2D list is an empty list, which causes this issue. If you remove it or just use
data = np.array(data_raw[1:]).astype(float)
it will work as intended.
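For illustration, here is a small self-contained sketch of the same situation (the data is a shortened, hypothetical version of yours):
import numpy as np

data_raw = [[], [7944, -11896, 3376], [8688, -12192, 1936]]

# The empty first row makes the list ragged, so NumPy cannot build a
# rectangular float array from it (depending on the NumPy version this
# fails either at np.array() or at .astype(float)).
data = np.array(data_raw[1:]).astype(float)
print(data.shape)  # (2, 3)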

How to generate metadata for LLVM IR?

I am trying to generate metadata for the LLVM IR I have generated. I want to generate metadata of the form:
!nvvm.annotations = !{!0}
!0 = metadata !{void ()* #foo, metadata !"kernel", i32 1}
where foo is a function in my LLVM IR. Right now I am only able to generate metadata of the form:
!nvvm.annotations = !{!0}
!0 = !{!"kernel"}
I used the following code for the above metadata generation.
char metaDataArgument[512];
sprintf(metaDataArgument, "%s", pipelineKernelName);
llvm::NamedMDNode *nvvmMetadataNode = LLVMModule->getOrInsertNamedMetadata("nvvm.annotations");
llvm::MDNode *MDNOdeNVVM = llvm::MDNode::get(*context, llvm::MDString::get(*context, "kernel"));
nvvmMetadataNode->addOperand(MDNOdeNVVM);
Could someone tell me how to modify the above code to generate metadata of the required form?
Your metadata will be a tuple with 3 elements.
The first one is a global value, which is wrapped as ValueAsMetadata when inserted into the metadata hierarchy (we can use the constant variant, since GlobalValues are constants).
The second is an MDString; you already have that one.
The last one is wrapped as a ConstantAsMetadata.
This should look approximately like the following:
SmallVector<Metadata *, 32> Ops; // Tuple operands
GlobalValue *Foo = Mod.getNamedValue("foo");
if (!Foo) report_fatal_error("Expected foo..");
Ops.push_back(llvm::ValueAsMetadata::getConstant(Foo));
Ops.push_back(llvm::MDString::get(*context, "kernel"));
// get constant i32 1
Type *I32Ty = Type::getInt32Ty(*context);
Constant *One = ConstantInt::get(I32Ty, 1);
Ops.push_back(llvm::ValueAsMetadata::getConstant(One));
auto *Node = MDTuple::get(*context, Ops);
nvvmMetadataNode->addOperand(Node); // attach to !nvvm.annotations as in your code

type matching error in theano [ Cannot convert Type Generic (of Variable <Generic>) into Type TensorType]

thisfile.py
import cPickle
import gzip
import os
import numpy
import theano
import theano.tensor as T
def load_data(dataset):
    f = gzip.open(dataset, 'rb')
    train_set, valid_set, test_set = cPickle.load(f)
    f.close()

    def shared_dataset(data_xy, borrow=True):
        data_x, data_y = data_xy
        shared_x = theano.shared(numpy.asarray(data_x,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y = theano.shared(numpy.asarray(data_y,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        return shared_x, T.cast(shared_y, 'int32')

    test_set_x, test_set_y = shared_dataset(test_set)
    valid_set_x, valid_set_y = shared_dataset(valid_set)
    train_set_x, train_set_y = shared_dataset(train_set)

    rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y),
            (test_set_x, test_set_y)]
    return rval

class PCA(object):
    def __init__(self):
        self.param = 0

    def dimemsion_transform(self, X):
        m_mean = T.mean(X, axis=0)
        X = X - m_mean  ##################### this line makes error
        return X

if __name__ == '__main__':
    dataset = 'mnist.pkl.gz'
    # load the MNIST data
    data = load_data(dataset)

    X = T.matrix('X')
    m_pca = PCA()
    transform = theano.function(
        inputs=[],
        outputs=m_pca.dimemsion_transform(X),
        givens={
            X: data
        }
    )
The error I get is shown below:
Traceback (most recent call last):
File ".../thisfile.py", line 101, in <module>
X: data
File ".../Theano/theano/compile/function.py", line 322, in function
output_keys=output_keys)
File ".../Theano/theano/compile/pfunc.py", line 443, in pfunc
no_default_updates=no_default_updates)
File ".../Theano/theano/compile/pfunc.py", line 219, in rebuild_collect_shared
cloned_v = clone_v_get_shared_updates(v, copy_inputs_over)
File ".../Theano/theano/compile/pfunc.py", line 93, in clone_v_get_shared_updates
clone_v_get_shared_updates(i, copy_inputs_over)
File ".../Theano/theano/compile/pfunc.py", line 93, in clone_v_get_shared_updates
clone_v_get_shared_updates(i, copy_inputs_over)
File ".../Theano/theano/compile/pfunc.py", line 93, in clone_v_get_shared_updates
clone_v_get_shared_updates(i, copy_inputs_over)
File ".../Theano/theano/compile/pfunc.py", line 96, in clone_v_get_shared_updates
[clone_d[i] for i in owner.inputs], strict=rebuild_strict)
File ".../Theano/theano/gof/graph.py", line 242, in clone_with_new_inputs
new_inputs[i] = curr.type.filter_variable(new)
File ".../Theano/theano/tensor/type.py", line 234, in filter_variable
self=self))
TypeError: Cannot convert Type Generic (of Variable <Generic>) into Type TensorType(float64, matrix). You can try to manually convert <Generic> into a TensorType(float64, matrix).
I am writing a PCA function with Theano but have a problem: the mean value is subtracted from the MNIST data in dimemsion_transform in the PCA class.
I do not understand why this gives a type matching error, or how to fix it.
Your problem comes from these lines:
data = load_data(dataset)
Here data is a list (as this is what load_data() returns).
transform = theano.function(
inputs=[],
outputs=m_pca.dimemsion_transform(X),
givens={
X: data
}
)
And here you pass it as a value. You have to extract the item you want from the return value of load_data() like so:
[(train_set_x, train_set_y), (valid_set_x, valid_set_y),
(test_set_x, test_set_y)] = load_data(dataset)
and then use
givens={
    X: train_set_x
}
or one of the other values.
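Putting the two pieces together, the corrected driver section would look roughly like this (a sketch based on the above, assuming the OP's load_data() and PCA definitions):
if __name__ == '__main__':
    dataset = 'mnist.pkl.gz'

    # unpack the three (x, y) pairs instead of keeping the whole list
    [(train_set_x, train_set_y), (valid_set_x, valid_set_y),
     (test_set_x, test_set_y)] = load_data(dataset)

    X = T.matrix('X')
    m_pca = PCA()

    transform = theano.function(
        inputs=[],
        outputs=m_pca.dimemsion_transform(X),
        givens={
            X: train_set_x  # a shared TensorType variable, not a Python list
        }
    )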

Error 'bool' object has no attribute 'any'

I wrote a script to do interpolation
import scipy.interpolate
import csv
inputfile1 = 'test.csv'
outputfile = 'Day1_out.csv'
distance_list = []
EC_list = []
new_dist_list=[]
outfile = open(outputfile,'w')
outfile.write('Distance,EC\n')
with open(inputfile1, 'rb') as csvfile:
    f1 = csv.reader(csvfile, delimiter=',')
    next(f1)  # skip header line
    for row in f1:
        dist = row[12]
        EC = row[13]
        distance_list.append(dist)
        EC_list.append(EC)

y_interp = scipy.interpolate.interp1d(distance_list, EC_list)

new_dist = 561.7
end = 560.2
while new_dist > end:
    new_dist_list.append(dist)
    new_dist = new_dist - 0.2

for distance in new_dist_list:
    EC = y_interp(distance)
    outfile.write(str(distance) + ',' + str(EC) + '\n')

outfile.close()
When I ran the script, it gave me this error message:
Traceback (most recent call last):
File "D:\14046\Scripts\interpolation_RoR.py", line 41, in <module>
EC=y_interp(distance)
File "C:\Python27\lib\site-packages\scipy\interpolate\polyint.py", line 54, in __call__
y = self._evaluate(x)
File "C:\Python27\lib\site-packages\scipy\interpolate\interpolate.py", line 448, in _evaluate
out_of_bounds = self._check_bounds(x_new)
File "C:\Python27\lib\site-packages\scipy\interpolate\interpolate.py", line 474, in _check_bounds
if self.bounds_error and below_bounds.any():
AttributeError: 'bool' object has no attribute 'any'
Does anyone have an idea where my error is?
BTW, the input file has these values for distance and EC:
Distance,EC
561.8,450
561.78,446
561.7,444
561.2,440
561.02,438
560.5,437
560.1,435
Thanks,
We were getting the same error message here; I think it is not necessarily a problem with your code.
In our case, switching to SciPy version 0.15.0 instead of 0.13.x solved the problem.
So it looks like the more recent version of SciPy accepts a wider range of input values.