zeroinitializer in global integer array in bitcode - llvm

I am trying to translate a global variable "int m[6];" to
@m = common global [6 x i32] zeroinitializer, align 16
For this, I have done the following. However, this implementation generates bitcode without the "zeroinitializer", and as a result lli fails to run the generated bitcode.
auto arrType = ArrayType::get(Type::getInt32Ty(TheContext), 6);
TheModule->getOrInsertGlobal("m", arrType);
auto gvar = TheModule->getNamedGlobal("m");
gvar->setLinkage(GlobalVariable::CommonLinkage);
gvar->setAlignment(MaybeAlign(16));
This produces:
@m = common global [6 x i32], align 16
My question is: how do I add the "zeroinitializer" to @m so that lli can execute my generated bitcode?

gvar->setInitializer() works as amt suggests! The following is the implementation which produces "zeroinitializer" for my gvar.
auto arrType = ArrayType::get(Type::getInt32Ty(TheContext), 6);
SmallVector<Constant*, 16> values;
for (auto i = 0; i < iptr->i_numelem; i++)
    values.push_back(llvm::Constant::getIntegerValue(Builder.getInt32Ty(), APInt(32, 0)));
Constant *init = llvm::ConstantArray::get(arrType, values);
gvar->setInitializer(init);
However, I am getting the following build warning. I wonder how to get rid of this warning.
warning: invalid conversion from 'llvm::Type*' to 'llvm::ArrayType*' [-fpermissive]
  124 |   Constant *init = llvm::ConstantArray::get(iptr->u.ltype, values);
      |                                              ~~~~~~~~^~~~~
      |                                                      |
      |                                                      llvm::Type*

I realized that I had made a mistake with the cast: the first parameter of ConstantArray::get() is an ArrayType*, so it needs the ArrayType* itself rather than a plain Type*. The warning message is gone now.
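Incidentally, the per-element loop can be avoided entirely. A shorter sketch that should emit the same initializer (my variant, not part of the original answer):
// ConstantAggregateZero prints as "zeroinitializer" in the textual IR.
auto *arrType = llvm::ArrayType::get(llvm::Type::getInt32Ty(TheContext), 6);
gvar->setInitializer(llvm::ConstantAggregateZero::get(arrType));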

Related

Issue Creating Vector LLVM

I am trying to create a vector and return it as a value using llvm. Here is what my code looks like:
Value *ArrayAST::codeGen() {
    Type *dType = Type::getDoubleTy(mContext);
    Type *vectorType = VectorType::get(dType, 4);
    Value *emptyVector = UndefValue::get(vectorType);
    Constant *index0 = Constant::getIntegerValue(dType, llvm::APInt(32, 0));
    Value *numberValue = numbers[0]->codeGen(); // double 1.000000e+00
    Value *fullVector = InsertElementInst::Create(emptyVector, numberValue, index0);
    return fullVector;
}
This generates the following IR code:
define <4 x double> @x() {
entry:
  ret <4 x double> <badref>
}
But, as you can see above, there is an issue: <badref>. And when I try to run it, it fails to build:
llvm-as: out.ll:3:21: error: expected type
ret <4 x double> <badref>
I am new to LLVM and I am not quite sure what the best way to fix this is.
Edit
If it is helpful, all of the code is on GitHub here.
Edit 2
If I am not mistaken (I totally could be) the IR code should look like this:
define <4 x double> @x() {
entry:
  %tmp4 = insertelement <4 x double> undef, double 1.000000e+01, i32 0
  ret <4 x double> %tmp4
}
The issue was that I did not add the vector to my builder. The following did the trick for me:
Builder.Insert(fullVector);
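For reference, a minimal sketch of the same codeGen() routed through the IRBuilder, so the insertelement is created at the builder's insertion point and never shows up as <badref> (mContext, Builder and numbers are the names used in the question; newer LLVM versions take an ElementCount in VectorType::get):
Value *ArrayAST::codeGen() {
    Type *dType = Type::getDoubleTy(mContext);
    Type *vectorType = VectorType::get(dType, 4);
    Value *vec = UndefValue::get(vectorType);
    Value *numberValue = numbers[0]->codeGen();
    // CreateInsertElement both builds the instruction and inserts it
    // into the current basic block.
    vec = Builder.CreateInsertElement(vec, numberValue, Builder.getInt32(0));
    return vec;
}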

Local Variables in Dtrace

How do I access variables local to a function using dtrace?
For example, in the following snippet I would like to know the value of variable x using dtrace.
void foo(int a) {
    int x = some_fun(a);
}
Tracing local variables is impossible for kernel code because there is no mechanism to instrument arbitrary kernel instructions. Even in user-land, tracing local variables is somewhat convoluted and so, for the specific example you give, it would make a lot more sense to trace the return value of some_fun() instead.
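For that simpler route, a hedged sketch of a pid return probe (this assumes some_fun() is compiled into the main executable; in a pid return probe, arg1 carries the return value):
pid$target:a.out:some_fun:return
{
    printf("some_fun() returned %d\n", (int)arg1);
}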
If you must trace an arbitrary local variable then you will need to determine its location (typically a register or a location in memory) at the specific point of interest. For simple cases you may be able to do this by disassembling the function and inspecting the output. For more complex cases it may be helpful to build the object with DWARF and then find the DW_AT_location attribute of the local variable's DIE.
Once you find the variable's location you'll need to express it in D; note that registers are exposed through the uregs[] array. Furthermore, you'll need to describe your probe using the offset within the function, since dtrace(1) has no way of understanding line numbers. See the section on "User Process Tracing" in the Oracle Solaris Dynamic Tracing Guide for more.
As an example, I wrote a trivial program containing
int
foo(int i)
{
    int x;
    ...
    for (x = 0; x < 10; x++)
        i += 2;
and built it, as an amd64 executable, with DWARF...
cc -m64 -g -o demo demo.c
...before looking for foo() and its definition of x in the output
of dwarfdump demo:
< 1><0x000000e4> DW_TAG_subprogram
DW_AT_name "foo"
...
DW_AT_frame_base DW_OP_reg6
< 2><0x00000121> DW_TAG_variable
DW_AT_name "x"
...
DW_AT_location DW_OP_fbreg -24
x is described as DW_OP_fbreg -24 but DW_OP_fbreg itself must be
substituted by the result of the parent function's DW_AT_frame_base
attribute, i.e. DW_OP_reg6. DWARF uses its own architecture-agnostic
numbering for registers and the mapping to individual registers is up to
the appropriate standards body. In this case, the AMD64 ABI tells
us that DWARF register 6 corresponds to %rbp. Thus x is stored at
%rbp - 0x18. (For more about DWARF itself I recommend Michael Eager's
Introduction to the DWARF Debugging Format.)
Thus, if you had found that the line of source in which you're
interested is at offset 0x32 (perhaps by inspecting the DWARF
line table) then you might write a probe like:
pid$target:a.out:foo:32
{
    self->up = (uintptr_t)(uregs[R_RBP] - 0x18);
    self->kp = (int *)copyin(self->up, sizeof (int));
    printf("x = %d\n", *self->kp);
    self->up = 0;
    self->kp = 0;
}
This is what I see when I run the demo program:
# dtrace -q -s test.d -c /tmp/demo
x = 1
x = 2
x = 3
x = 4
x = 5
x = 6
x = 7
x = 8
x = 9
x = 10
#

pointer to a variable is not updating variable when dereferenced

This is a debugging problem I've been trying to solve. I know the bit mask I need to apply to make b equal a. I inspected with gdb to find the difference between a and b. The variables a and b are char arrays and are set before the 'bug' is reached.
#include <string.h>

int main() {
    char a[1] = "a";
    char b[1] = "b";
    int *x;
    x = (int *) b;
    // bug in next line
    *x = *x & 0xffffffff;
    return memcmp(a, b, 1);
}
Until a equals b, I can't solve the problem. The only constraint given is that the bug is in the line noted; no other code is to be changed. There is no rule saying I can't add lines after the bug and before the memcmp(), though. The issue I find is that nothing I do to the bit mask ever changes the value of b. I've set breakpoints and inspected the values of x and *x before and after the bug, but x never seems to change.
Breakpoint 1, main () at test.c:9
9 *x = *x & 0xffffffff;
(gdb) print (int) a
$1 = -6922
(gdb) print (int) b
$2 = -6921
(gdb) print (int) x
$3 = -6921
(gdb) step
Breakpoint 2, main () at test.c:10
10 return memcmp(a, b, 1);
(gdb) print (int) a
$4 = -6922
(gdb) print (int) b
$5 = -6921
(gdb) print (int) x
$6 = -6921
I don't see how this can be solved the way requested, by modifying the constant in the line where the bug is. Any help to understand how to use x to update b using a bitwise mask would be appreciated.
x is a pointer; casting it to an int simply gives you the address as a decimal number.
a and b are both arrays, which will decay to a pointer when you do operations that require a pointer. By casting them to int you're again getting the address of the variable. The address doesn't change with the operation you're performing, even when the contents at that address change.
Since a and b are both smaller than an int, your code is likely to mess up in ways that are extremely painful. Even if they were the right size, this isn't guaranteed to do the right thing.
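To illustrate the difference between a pointer's value (the address) and the bytes it points at, here is a small standalone snippet (my own illustration, not part of the puzzle):
#include <stdio.h>

int main(void) {
    char b[] = "b";
    char *x = b;                         /* x holds the address of b[0] */
    printf("address: %p\n", (void *)x);  /* the pointer value itself */
    printf("content: %c\n", *x);         /* the byte stored at that address */
    return 0;
}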
You are trying to change the address of b, but in
*x = *x & 0xffffffff;
you are changing the value, because you are dereferencing x. You need to apply the manipulation to x itself, like
x = x & 0xffffffff;
and then you need to reassign x into b.
This will run afoul of the strict aliasing rules.

Bitwise and/or with ternary operator

Look at this tiny snippet.
y<v|!v?:y=v;
(y is the running minimum and v is the current value being compared; thinking of it that way makes the snippet easier to follow.)
This snippet’s meaning is simple.
If the current value v is smaller than the minimum value y, set the new minimum (y=v). But the case v=0 is excluded.
Then I thought that if I wrote the 'inverse' code, the result should be the same. I mean,
y>v&v?y=v:;
This code should do the same thing, but it does not compile. The error is as follows.
error: expected expression
for(int v: a) v=abs(a[i]-v), x>v?:x=v, y>v&v?y=v:;
^
It’s weird. I think the two snippets mirror each other: if the latter ternary expression is erroneous, the former should have the same problem, but it doesn't.
Can someone explain why?
Next question.
I inserted a 0 so it would compile: y>v&v?y=v:0;
Then I got a wrong answer.
So I changed & to &&: y>v&&v?y=v:0;
Finally I got the right answer. But the | version needs none of these changes. Why?
<additional info>
My compiler version is as follows.
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.4.0
Thread model: posix
And compile option:
g++ -std=c++11 my.cpp
If you want sample code to test, this should help.
#include <iostream>
#include <vector>
#include <climits>
#include <cstdlib>
using namespace std;

int working(int i, vector<int> a) {
    int y = INT_MAX;
    for (int v : a) v = abs(a[i]-v), y<v|!v?:y=v;
    return y;
}

int not_working(int i, vector<int> a) {
    int y = INT_MAX;
    for (int v : a) v = abs(a[i]-v), y>v&v?y=v:0;
    return y;
}

int main() {
    vector<int> b({-5,-2,2,7});
    cout << working(2, b) << endl;
    cout << not_working(2, b) << endl;
    return 0;
}
(p.s. correction of my poor english is always welcomed)
The codes do not do the same thing, because the conditions are not the negation of each other. Try it with y == 3, v == 2:
y < v | !v => 3 < 2 | !2 => false | !true => false | false => 0 | 0 => 0
y > v & v => 3 > 2 & 2 => true & 2 => 1 & 2 => 0
In this snippet:
y<v|!v?:y=v;
the value of v is converted to a bool and negated with !. Because both sides of the bitwise or | are bools, | behaves like the logical or.
In the other case:
y>v&v?y=v:0;
there is no conversion to bool, and instead the result of y>v is converted to int. The bitwise and & gives varying results, depending on the lowest bit of v. It does not behave like the logical and.
This should work like the original:
y>v&!!v?y=v:0;
because there is a conversion to bool.
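A tiny standalone check of that difference (my own illustration, not taken from the answer):
#include <iostream>
int main() {
    int y = 3, v = 2;
    std::cout << (y > v & v) << '\n';   // bitwise: (3 > 2) & 2 == 1 & 2 == 0
    std::cout << (y > v && v) << '\n';  // logical: true && 2 is true, prints 1
    return 0;
}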
The syntax of the conditional operator is:
logical-expression ? true-expression : false-expression
In the first one,
y<v|!v ? : y=v;
you are missing true-expression. I get the following compiler warning with g++ when compiled with -Wall.
socc.cc: In function ‘int main()’:
socc.cc:14:12: warning: the omitted middle operand in ?: will always be ‘true’, suggest explicit middle operand [-Wparentheses]
y<v||!v?:y=v;
In the second one,
y>v&v ? y=v : ;
You are missing the false-expression. g++ treats this as a hard error rather than a warning because omitting the middle operand is a GNU extension, while there is no corresponding extension for omitting the third operand.
You could fix that by providing a dummy value for both of them.
By the way, you are using bitwise operators | and &. I am sure that is a small error.
You can use:
(y<v || !v) ? 0 : y=v;
or
(y>v && v) ? y=v : 0;
Update
The expression
(y<v || !v) ? : y=v;
is not legal C/C++. It is supported by g++ as an extension. More can be seen at C conditional operator ('?') with empty second parameter.
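For what it's worth, both one-liners are a terse way of writing the following plain statement (my rewrite, not taken from either answer):
if (v != 0 && v <= y)   // keep the smallest nonzero value seen so far
    y = v;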

LLVM, Initialize an integer global variable with value 0

I've been going in circles through the LLVM documentation / Stack Overflow and cannot figure out how an integer global variable should be initialized as 0 (first time using LLVM). This is some of my code currently:
TheModule = (argc > 1) ? new Module(argv[1], Context) : new Module("Filename", Context);
// Unrelated code
// currentGlobal->id is just a string
TheModule->getOrInsertGlobal(currentGlobal->id, Builder.getInt32Ty());
llvm::GlobalVariable* gVar = TheModule->getNamedGlobal(currentGlobal->id);
gVar->setLinkage(llvm::GlobalValue::CommonLinkage);
gVar->setAlignment(4);
// What replaces "???" below?
//gVar->setInitializer(???);
This almost does what I want, an example of output it can produce:
@a = common global i32, align 4
@b = common global i32, align 4
@c = common global i32, align 4
However, clang foo.c -S -emit-llvm produces this which I want as well:
@a = common global i32 0, align 4
@b = common global i32 0, align 4
@c = common global i32 0, align 4
As far as I can tell I need a Constant* where I have "???", but am not sure how to do it: http://llvm.org/docs/doxygen/html/classllvm_1_1GlobalVariable.html#a095f8f031d99ce3c0b25478713293dea
Use one of the APInt constructors to get a 0-valued ConstantInt (AP stands for Arbitrary Precision)
ConstantInt* const_int_val = ConstantInt::get(module->getContext(), APInt(32,0));
Then set your initializer value (a Constant subclass)
global_var->setInitializer(const_int_val);
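Putting the question's code and this answer together, a sketch of the full sequence (same assumed names as above; newer LLVM versions take llvm::MaybeAlign(4) instead of a plain 4):
TheModule->getOrInsertGlobal(currentGlobal->id, Builder.getInt32Ty());
llvm::GlobalVariable* gVar = TheModule->getNamedGlobal(currentGlobal->id);
gVar->setLinkage(llvm::GlobalValue::CommonLinkage);
gVar->setAlignment(4);
// The explicit i32 0 initializer makes the global print as
// "@a = common global i32 0, align 4".
gVar->setInitializer(llvm::ConstantInt::get(TheModule->getContext(),
                                            llvm::APInt(32, 0)));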