My compiler won't work with an assembly file I have, and my other compiler that will won't work with the C files I have. I don't understand assembly. I need to port this over, but I'm not getting anywhere fast. Is there someone out there who can help? I can't believe there isn't a translator available. Here is the beginning of the file:
list p=18F4480
#include <p18F4480.inc>
#define _Z STATUS,2
#define _C STATUS,0
GLOBAL AARGB0,AARGB1,AARGB2,AARGB3
GLOBAL BARGB0,BARGB1,BARGB2,BARGB3
GLOBAL ZARGB0,ZARGB1,ZARGB2
GLOBAL REMB0,REMB1
GLOBAL TEMP,TEMPB0,TEMPB1,TEMPB2,TEMPB3
GLOBAL LOOPCOUNT,AEXP,CARGB2
LSB equ 0
MSB equ 7
math_data UDATA
AARGB0 RES 1
AARGB1 RES 1
AARGB2 RES 1
AARGB3 RES 1
BARGB0 RES 1
BARGB1 RES 1
BARGB2 RES 1
BARGB3 RES 1
REMB0 RES 1
REMB1 RES 1
REMB2 RES 1
REMB3 RES 1
TEMP RES 1
TEMPB0 RES 1
TEMPB1 RES 1
TEMPB2 RES 1
TEMPB3 RES 1
ZARGB0 RES 1
ZARGB1 RES 1
ZARGB2 RES 1
CARGB2 RES 1
AEXP RES 1
LOOPCOUNT RES 1
math_code CODE
;---------------------------------------------------------------------
; 24-BIT ADDITION
_24_BitAdd
GLOBAL _24_BitAdd
movf BARGB2,w
addwf AARGB2,f
movf BARGB1,w
btfsc _C
incfsz BARGB1,w
addwf AARGB1,f
movf BARGB0,w
btfsc _C
incfsz BARGB0,w
addwf AARGB0,f
return
I get that I can largely exclude the first two lines, as the device defines are in my main.c anyway. The two #defines are just that, but the simplest way (I think) is to just replace instances of _Z and _C with STATUS,2 and STATUS,0 respectively. The next lines (the GLOBALs) are simply variable declarations, I'm gathering. Same with LSB and MSB, except they also assign values. For the next chunk I think I just declare a bunch of integers with those names (AARGB0, etc.), and then the chunk after that is a function.
I don't even bother to translate that function, because my compiler has #asm/#endasm directives, so I can put it in raw (as long as it's wrapped in a function).
I think I have it all... until I build and my compiler screams about STATUS not being defined. And of course it's not. But what is it? I read on the net that STATUS registers are something special, but I really don't get how they work.
If you haven't noticed, I'm not even sure what it is I'm really asking. I just want the bloody thing to work.
Your compilers are refusing your source?
Either you are using broken tools, or your source files are buggy. In both cases, your problem is not "translating ASM to C" or anything like that, but the bugs in your source or toolchain. Do not try to work around problems; solve them.
STATUS is a built-in register in the PIC architecture that implements the same functionality as the "status" or "condition" flags register in most larger CPUs. It contains bit flags that are set and cleared by the microcontroller as it executes code, telling you the result of operations.
The Z flag, for instance, is set whenever an arithmetic operation results in zero, and the C (carry) flag is set when an addition produces a carry out of the most significant bit.
These flags are typically not visible from C, as C doesn't want to require that the host processor even has status bits, so directly translating this code to C will be hard. You will need to figure out a way to express the status-bit tests in the C code. This might be troublesome, as from C you have less control over which registers are being used, which in turn might make it hard to make sure you're checking the proper flags in the right place(s).
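For a routine like the 24-bit addition above, though, the carry can be emulated portably by doing each byte's addition in a wider type. A minimal sketch (my translation, not from the original post), assuming AARGB0 and BARGB0 hold the most significant bytes:

#include <stdint.h>

uint8_t AARGB0, AARGB1, AARGB2;   /* accumulator, MSB first */
uint8_t BARGB0, BARGB1, BARGB2;   /* addend, MSB first */

/* 24-bit add: AARG += BARG; the PIC carry flag is emulated by the
   high byte of a 16-bit partial sum. */
void add24(void)
{
    uint16_t sum = (uint16_t)AARGB2 + BARGB2;        /* low bytes */
    AARGB2 = (uint8_t)sum;
    sum = (uint16_t)AARGB1 + BARGB1 + (sum >> 8);    /* + carry */
    AARGB1 = (uint8_t)sum;
    sum = (uint16_t)AARGB0 + BARGB0 + (sum >> 8);    /* + carry */
    AARGB0 = (uint8_t)sum;
}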
Other people have published extended-precision PIC math code. Most of it seems to remain in assembly, but it might still be useful as a reference or inspiration.
You can try to reverse-engineer the disassembly. But what will you learn?
You should be able to assemble your assembly file (using toolchain 1) into an object file, and link that with the object file compiled by compiler 2 from your C files.
Assuming x is an 8-bit unsigned integer, what is the most efficient operation to set the last two bits to 01?
So regardless of the initial value it should be x = ******01 in the final state.
In order to set
the last bit to 1, one can use OR like x |= 00000001, and
the second-to-last bit to 0, one can use AND like x &= 11111101, which is ~(1<<1).
Is there an arithmetic or logical operation that can be used to apply both operations at the same time?
Can this be answered independently of any program-specific implementation, using pure logical operations?
Of course you can put both operations into one statement:
x = (x & 0b11111100) | 1;
That at least saves one assignment and one memory read/write. However, the compiler will most likely optimize this anyway, even if it is written as two statements.
Depending on the target CPU, compilers may even optimize the code into bit-manipulation instructions that can directly set or reset single bits. And if the variable is heavily used locally, then most likely it's kept in a register.
So in the end the generated code may look as simple as this (pseudo-asm):
load x
setBit 0
clearBit 1
store x
However, it may also be compiled into something like:
load x to R1
load immediate 0b11111100 to R2
and R1, R2
load immediate 1 to R2
or R1, R2
store R1 to x
For playing with things like this, you can take a look at Compiler Explorer: https://godbolt.org/z/sMhG3YTx9
Try removing the -O2 compiler option and see the difference between optimized and non-optimized code. You can also switch to different CPU architectures and compilers.
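For instance, a minimal function to paste in (the function name is mine, just for illustration; binary literals like 0b11111100 are a compiler extension before C23, so the hex form is used):

unsigned char set_last_two_bits(unsigned char x)
{
    return (unsigned char)((x & 0xFCu) | 0x01u);   /* 0xFC == 0b11111100 */
}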
This is not possible with a dyadic bitwise operator, because the second operand would have to distinguish between "reset to 0", "set to 1", and "leave unchanged", and that cannot be encoded in a single binary mask.
Instead, you can use the bitwise ternary operator and form the expression
x = 0b11111100 ??? x : 0b00000001.
Anyway, a word of caution: this operator does not exist.
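(That said, the per-bit selection it describes can be built from the operators that do exist. A small sketch of such a bitwise multiplexer, with names of my own choosing:)

unsigned char bitmux(unsigned char m, unsigned char x, unsigned char y)
{
    return (unsigned char)((m & x) | (~m & y));   /* per bit: m ? x : y */
}

/* x = bitmux(0xFC, x, 0x01); keeps the top six bits of x and forces 01 */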
I was wondering how an if statement or other conditional statement works behind the scenes when executed.
Consider an example like this:
if (10 > 6) {
// Some code
}
How does the compiler or interpreter know that the number 10 is greater than 6, or that 6 is less than 10?
At some point near the end of the compilation, the compiler will convert the above into assembly language similar to:
start: # This is a label that you can reference
mov ax, 0ah # Store 10 in the ax register
mov bx, 06h # Store 6 in the bx register
cmp ax, bx # Compare ax to bx
jg inside_the_brackets # 10 > 6? Goto `inside_the_brackets`
jmp after_the_brackets # Otherwise skip ahead a little
inside_the_brackets:
# Some code - stuff inside the {} goes here
after_the_brackets:
# The rest of your program. You end up here no matter what.
I haven't written assembler in years, so I know that's a jumble of different varieties, but the above is the gist of it. Now, that's an inefficient way to structure the code, so a smart compiler might write it more like:
start: # This is a label that you can reference
mov ax, 0ah # Store 10 in the ax register
mov bx, 06h # Store 6 in the bx register
cmp ax, bx # Compare ax to bx
jle after_the_brackets # 10 <= 6? Goto `after_the_brackets`
inside_the_brackets:
# Some code - stuff inside the {} goes here
after_the_brackets:
# The rest of your program. You end up here no matter what.
See how that reversed the comparison, so instead of if (10 > 6) it's more like if (10 <= 6)? That removes a jmp instruction. The logic is identical, even if it's no longer exactly what you originally wrote. There -- now you've seen an "optimizing compiler" at work.
Every compiler you're likely to have heard of has a million tricks to convert code you write into assembly language that acts the same, but that the CPU can execute more efficiently. Sometimes the end result is barely recognizable. Some of the optimizations are as simple as what I just did, but others are fiendishly clever and people have earned PhDs in this stuff.
Kirk Strauser's answer is correct. However, you ask:
How does the compiler or interpreter knows that the number 10 is greater than 6 or 6 is less than 10?
Some optimizing compilers can see that 10 > 6 is a constant expression equivalent to true, and will not emit any check or jump at all. If you are asking how they do that, well…
I'll explain the process in steps that hopefully are easy to understand. I'm covering no advanced topics.
The build process will start by parsing your code document according to the syntax of the language.
The syntax of the language will define how to interpret the text of the document (think a string with your code) as a series of symbols or tokens (e.g. keywords, literals, identifiers, operators…). In this case we have:
an if symbol.
a ( symbol.
a 10 symbol.
a > symbol.
a 6 symbol.
a ) symbol.
a { symbol.
and a } symbol.
I'm assuming comments, newlines and white-space do not generate symbols in this language.
From the series of symbols, it will build a tree-like memory structure (see AST) according to the rules of the language.
The tree will say that your code is:
An "if statement", that has two children:
A conditional (a boolean expression), which is a greater than comparison that has two children:
A constant literal integer 10
A constant literal integer 6
A body (a set of statements), in this case empty.
Then the compiler can look at that tree and figure out how to optimize it, and emit code in the target language (let us say machine code).
The optimization process will see that the conditional does not have variables, it is composed entirely of constants that are known at compile time. Thus it can compute the equivalent value and use that. Which leaves us with this:
An "if statement", that has two children:
A conditional (a boolean expression), which is a literal true.
A body (a set of statements), in this case empty.
Then it will see that we have a conditional that is always true, and thus we don't need it. So it replaces the if statement with the set of statements in its body. Since there are none, we have optimized the code away to nothing.
You can imagine how the process would then go over the tree, figuring out what is the equivalent in the target language (again, let us say, machine code), and emitting that code until it has gone over the whole tree.
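To see this in practice, here is a small example (mine, not from the original answer) you can feed to any optimizing C compiler; with optimization on, both conditions are folded at compile time, so no comparison or branch appears in the output:

#include <stdio.h>

int main(void)
{
    if (10 > 6) {          /* constant: folded to true, branch removed */
        puts("always taken");
    }
    if (6 > 10) {          /* constant: folded to false, body removed */
        puts("never taken");
    }
    return 0;
}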
I want to mention that intermediate languages and JIT (just-in-time) compilation have become very common. See Understanding the differences: traditional interpreter, JIT compiler, JIT interpreter and AOT compiler.
My description of how the build process works is a toy textbook example. I would like to encourage you to learn more about the topic. I'll suggest, in this order:
Computerphile's Compilers with Professor Brailsford video series.
The good old Dragon Book [pdf], and other books such as "How To Create Pragmatic, Lightweight Languages" and "Parsing with Perl 6 Regexes and Grammars".
Finally, CS 6120: Advanced Compilers: The Self-Guided Online Course, which is not about parsing, because it presumes you already know that.
The ability to actually check that is implemented in hardware. To be more specific, it will subtract 10 - 6 (subtraction is one of the basic instructions that processors can do), and if the result is less than or equal to 0, it will jump to the end of the block (comparing numbers to zero and jumping based on the result are also basic instructions). If you want to learn more, the keywords to look for are "instruction set architecture" and "assembly".
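A rough C analogue of that hardware behavior (my illustration, not from the answer): the comparison computes a difference, and the branch inspects its sign to decide whether to skip the block.

#include <stdio.h>

int main(void)
{
    int diff = 10 - 6;                  /* what cmp computes internally */
    if (diff <= 0)                      /* the conditional jump */
        goto after_the_brackets;
    puts("10 > 6, body executed");      /* the code inside the {} */
after_the_brackets:
    return 0;                           /* the rest of the program */
}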
I want to have a macro (or anything else that works) that can go through a C/C++ source file and count the number of occurrences of a specific string (in the physical C/C++ file).
#include <stdio.h>

#define numInFile(str) [???]

int main() {
    printf("blahblah");
    printf("You've used printf %d times", numInFile("printf") - 2); // -2 accounts for this call
    return 0;
}
Edit: The question was originally specific to using this functionality for exit calls. It is now generalized for any use.
If I understand you correctly, you want to have unique error codes, that you can trace back to the line where the error happened?
I will address that Y question instead of your X one:
You can use __LINE__. __LINE__ expands to an integer constant of the current line number. You could #define quit as:
#include <stdlib.h>

#define quit(code) (quit)(__LINE__ + (code))

void (quit)(int code) { // separate func in case you want to do more
    exit(code);
}
Keep in mind, though, that the exit code of a process is not the best way to encode such information. On POSIX, only the lower 8 bits of an exit code are guaranteed to be available. But as you already use 300 as a base value, I assume you are on Windows or some other system where this isn't a concern.
For debugging purposes, alternatively consider writing to stderr when an error happens (maybe behind a command-line flag).
If exit was just an example, and you intend to use this inside your application, you could save __LINE__ and __FILE__ in global (or _Thread_local) variables on error and store only the exit reason in the error code.
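A minimal sketch of that idea (the names are my own, not from the question):

#include <stdio.h>
#include <stdlib.h>

static const char *g_err_file;   /* could be _Thread_local */
static int g_err_line;

static void do_fail(int reason)
{
    fprintf(stderr, "error %d at %s:%d\n", reason, g_err_file, g_err_line);
    exit(reason);                /* the exit code carries only the reason */
}

#define fail(reason) (g_err_file = __FILE__, g_err_line = __LINE__, do_fail(reason))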
Regarding your X question: the preprocessor doesn't do such things. You would have to offload the task to a shell/perl/whatever script that your build system can call.
There's nothing built in to do this. It would be possible to hook something up to your build system to generate a header file with the relevant counts, and use a macro to pull the right value from that header file.
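A hedged sketch of what that could look like (the file and macro names are assumptions; the counting itself would be done by a pre-build script, e.g. with grep -c):

/* counts.h, regenerated before every build by the script; it would
   contain a single line like: */
#define NUM_PRINTF_IN_FILE 2

/* main.c would then pull it in via #include "counts.h": */
#include <stdio.h>

int main(void) {
    printf("blahblah");
    printf("You've used printf %d times", NUM_PRINTF_IN_FILE);
    return 0;
}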
However, various Unix systems limit the range of exit values (the Linux machine I am looking at will only use the lowest 8 bits, meaning that exit(256) is identical to exit(0)), so you probably don't want to do this in the first place. You'd be better off using a logging macro that emits the name of the compilation unit and the line where it was expanded, and then calls exit(EXIT_FAILURE).
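Such a macro could look like this (a sketch; the name die is my own):

#include <stdio.h>
#include <stdlib.h>

#define die(msg) (fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, (msg)), \
                  exit(EXIT_FAILURE))

/* usage: if (!ptr) die("allocation failed"); */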
We have an assignment where we need to profile a 'simple instruction' (addition or bitwise AND, for example). This means performing the same operation a large number of times (100K+) and measuring the average time in microseconds. The result should be presented in cycle lengths: (totalTime/iterations)*cphMHz.
So, results may vary, but all in all we were told that we should get a result close to 1 cycle length. The actual result doesn't matter as long as the programming is correct.
My question is: what is a good operation to profile?
There are two points I need to consider:
I use loop unrolling to be a bit more accurate, so in each iteration I perform 10 simple instructions. This means I have to choose an operation that wouldn't be collapsed into a single one by compiler optimization (we can't rely on the -O0 flag, as the school staff doesn't use it).
Bad example: var = i; here the compiler would only perform the last assignment.
What is a real 'simple instruction'? How do I know the number of operations that are actually performed? I tried reading the assembly output, but I couldn't understand it.
Hope I was clear enough; any idea would be great.
Thanks anyway.
P.S. I don't know if it matters, but I write in C++.
1) This sounds (to me) like an impossible task if optimizations are (or might be) enabled. You can never be sure what the compiler will do during optimization. I'd definitely do something like reusing the previous result. If allowed/possible, I'd try to include a raw assembler snippet to be profiled (so you can be sure there's no additional overhead; although it still could be optimized).
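One concrete way (my suggestion, not part of the original answer) to keep the compiler from folding the unrolled additions together is an empty GCC inline-asm statement used as an optimization barrier between them:

#include <stdio.h>

#define KEEP(x) __asm__ volatile("" : "+r"(x))   /* barrier; emits no code */

int main(void)
{
    unsigned acc = 0;
    for (long i = 0; i < 100000; ++i) {
        acc += 1; KEEP(acc);
        acc += 1; KEEP(acc);
        acc += 1; KEEP(acc);    /* ...unrolled to 10 in the real harness */
    }
    printf("%u\n", acc);        /* use the result so the loop survives */
    return 0;
}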
2) As for instructions: one assembler command is one instruction. E.g., a += i will, depending on the available instruction set, most likely result in 4 instructions: read a, read i, add, write a. Reading assembly is pretty much straightforward. Depending on the instruction set/processor, there might be different "directions" for reading (i.e. "from -> to"): x86 assemblers (and those for most other common processors) prefer instruction target, source, while DSPs prefer instruction source, target. Also important to know: data has to move through registers, so even a single assignment like a = b will result in two instructions (b to a register, and the register to a).
In general, if this answer goes in the wrong direction, try to elaborate a bit more on your specific task and its requirements (e.g. which compiler is to be used) and drop me a short comment.
I'm having difficulty understanding the role constraints play in GCC inline assembly (x86). I've read the manual, which explains exactly what each constraint does. The problem is that even though I understand what each constraint does, I have very little understanding of why you would use one constraint over another, or what the implications might be.
I realize this is a very broad topic, so a small example should help narrow the focus. The following is a simple asm routine which just adds two numbers. If an integer overflow occurs, it writes a value of 1 to an output C variable.
int32_t a = 10, b = 5;
int32_t c = 0; // overflow flag
__asm__
(
"addl %2,%3;" // Do a + b (the result goes into b)
"jno 0f;" // Jump ahead if an overflow occurred
"movl $1, %1;" // Copy 1 into c
"0:" // We're done.
:"=r"(b), "=m"(c) // Output list
:"r"(a), "0"(b) // Input list
);
Now this works fine, except I had to arbitrarily fiddle with the constraints until I got it to work correctly. Originally, I used the following constraints:
:"=r"(b), "=m"(c) // Output list
:"r"(a), "m"(b) // Input list
Note that instead of a "0", I use an "m" constraint for b. This had a weird side effect: if I compiled with optimization flags and called the function twice, for some reason the result of the addition operation would also get stored in c. I eventually read about "matching constraints", which let you specify that a variable is to be used as both an input and an output operand. When I changed "m"(b) to "0"(b), it worked.
But I don't really understand why you would use one constraint over another. I mean yeah, I understand that "r" means the variable should be in a register and "m" means it should be in memory - but I don't really understand what the implications of choosing one over another are, or why the addition operation doesn't work correctly if I choose a certain combination of constraints.
Questions: 1) In the above example code, why did the "m" constraint on b cause c to get written to? 2) Is there any tutorial or online resource which goes into more detail about constraints?
Here's an example to better illustrate why you should choose constraints carefully (same function as yours, but perhaps written a little more succinctly):
bool add_and_check_overflow(int32_t& a, int32_t b)
{
bool result;
__asm__("addl %2, %1; seto %b0"
: "=q" (result), "+g" (a)
: "r" (b));
return result;
}
So, the constraints used were: q, r, and g.
q means only eax, ecx, edx, or ebx could be selected. This is because the set* instructions must write to an 8-bit-addressable register (al, ah, ...). The b in %b0 means: use the lowest 8-bit portion (al, cl, ...).
For most two-operand instructions, at least one of the operands must be a register. So don't use m or g for both; use r for at least one of the operands.
For the final operand, it doesn't matter whether it's register or memory, so use g (general).
In the example above, I chose to use g (rather than r) for a because references are usually implemented as memory pointers, so using an r constraint would have required copying the referent to a register first, and then copying back. Using g, the referent could be updated directly.
As to why your original version overwrote your c with the addition's value, that's because you specified =m in the output slot, rather than (say) +m; that means the compiler is allowed to reuse the same memory location for input and output.
In your case, that means two outcomes (since the same memory location was used for b and c):
The addition didn't overflow: then, c got overwritten with the value of b (the result of the addition).
The addition did overflow: then, c became 1 (and b might become 1 also, depending on how the code was generated).
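For completeness, a plain-C adaptation of the function above with a short usage check (a sketch of mine: I swapped the C++ reference for a pointer, and used unsigned char so only the byte written by seto is returned):

#include <stdint.h>
#include <stdio.h>

static unsigned char add_and_check_overflow_c(int32_t *a, int32_t b)
{
    unsigned char result;
    __asm__("addl %2, %1; seto %b0"
            : "=q"(result), "+g"(*a)
            : "r"(b));
    return result;
}

int main(void)
{
    int32_t x = INT32_MAX;
    printf("overflow: %d, x = %ld\n", add_and_check_overflow_c(&x, 1), (long)x);
    /* prints overflow: 1 (x wrapped around to INT32_MIN) */
    return 0;
}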