Inline io wait using MASM - c++

How to convert this to use VC++ and MASM
static __inline__ void io_wait(void)
{
asm volatile("jmp 1f;1:jmp 1f;1:");
}
I know asm changes to __asm and we remove the volatile, but what's next?
I am trying to create the function to place in the code below
#define PIC1 0x20
#define PIC2 0xA0
#define PIC1_COMMAND PIC1
#define PIC1_DATA (PIC1+1)
#define PIC2_COMMAND PIC2
#define PIC2_DATA (PIC2+1)
#define PIC_EOI 0x20
#define ICW1_ICW4 0x01 /* ICW4 (not) needed */
#define ICW1_SINGLE 0x02 /* Single (cascade) mode */
#define ICW1_INTERVAL4 0x04 /* Call address interval 4 (8) */
#define ICW1_LEVEL 0x08 /* Level triggered (edge) mode */
#define ICW1_INIT 0x10 /* Initialization - required! */
#define ICW4_8086 0x01 /* 8086/88 (MCS-80/85) mode */
#define ICW4_AUTO 0x02 /* Auto (normal) EOI */
#define ICW4_BUF_SLAVE 0x08 /* Buffered mode/slave */
#define ICW4_BUF_MASTER 0x0C /* Buffered mode/master */
#define ICW4_SFNM 0x10 /* Special fully nested (not) */
void remap_pics(int pic1, int pic2)
{
UCHAR a1, a2;
a1=ReadPort8(PIC1_DATA);
a2=ReadPort8(PIC2_DATA);
WritePort8(PIC1_COMMAND, ICW1_INIT+ICW1_ICW4);
io_wait();
WritePort8(PIC2_COMMAND, ICW1_INIT+ICW1_ICW4);
io_wait();
WritePort8(PIC1_DATA, pic1);
io_wait();
WritePort8(PIC2_DATA, pic2);
io_wait();
WritePort8(PIC1_DATA, 4);
io_wait();
WritePort8(PIC2_DATA, 2);
io_wait();
WritePort8(PIC1_DATA, ICW4_8086);
io_wait();
WritePort8(PIC2_DATA, ICW4_8086);
io_wait();
WritePort8(PIC1_DATA, a1);
WritePort8(PIC2_DATA, a2);
}

I think you'll have better luck by telling us what you're trying to do with this code. Neither of the platforms supported by VC++ will wait for IO completion by executing an unconditional jump.
Nevertheless, given your example, I see several problems you need to address first:
"1f" needs to have a suffix indicating that it's hexadecimal. In VC++ you can use either C-style (0x1f) or assembly style (1fh) suffixes in inline assembly
it seems that you've got two "1" labels. Besides the fact that two labels of the same name are going to collide, I believe VC++ doesn't support label names containing only digits
1fh is a strange address to jump to. In real mode it's in the interrupt vector area; in protected mode it's inside the first page, which most OSes keep not-present to catch NULL dereferences.
Barring that, your code translated to VC++ would look like this:
__asm {
jmp 1fh
a1:
jmp 1fh
b1:
}
But this will not get you anything useful, so please state what you're trying to accomplish.

This is GNU gas syntax: jmp 1f means jump forward to the nearest label named 1.
static __inline__ void io_wait(void)
{
#ifdef __GNUC__
asm volatile("jmp 1f;1:jmp 1f;1:");
#else
/* MSVC x86 supports inline asm */
__asm {
jmp a1
a1:
jmp b1
b1:
}
#endif
}
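For what it's worth, the conventional way to get a short I/O delay on PC hardware is not a jump at all but a write to an unused port (0x80, the POST diagnostic port). In VC++ kernel or driver code this can be expressed with the __outbyte intrinsic instead of inline asm. A sketch; this is privileged (ring 0) code and assumes an x86/x64 target:

```c
#include <intrin.h>  /* MSVC intrinsics; __outbyte is available on x86 and x64 */

/* Writing to port 0x80 burns roughly one microsecond and has no
 * side effects; it is the traditional io_wait on PC-compatible hardware.
 * This must run in kernel mode. */
static __inline void io_wait(void)
{
    __outbyte(0x80, 0);
}
```

Using the intrinsic also works in x64 builds, where MSVC does not support inline assembly at all.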

Related

Improper operand type MSVC

Currently trying to emit a random instruction from a method but keep getting the error "Improper operand type".
#include <iostream>
#include <time.h>
#define PUSH 0x50
#define POP 0x58
#define NOP 0x90
auto generate_instruction() -> int {
int instruction_list[] = { NOP };
return instruction_list[rand() % (sizeof(instruction_list) / sizeof(*instruction_list))];
}
#define JUNK_INSTRUCTION(x) \
__asm _emit PUSH \
__asm _emit x \
__asm _emit POP
#define JUNK JUNK_INSTRUCTION(generate_instruction)
int main() {
srand(static_cast<int>(time(NULL)));
JUNK;
std::cout << "Hello World!" << std::endl;
}
However when I replace #define JUNK JUNK_INSTRUCTION(generate_instruction) with #define JUNK JUNK_INSTRUCTION(NOP) , the program runs fine. I'm unsure as to why it's not working when they both return the same value.
Not sure what you are trying to do.
JUNK expands to JUNK_INSTRUCTION(generate_instruction), which will expand to:
__asm _emit PUSH
__asm _emit generate_instruction
__asm _emit POP
generate_instruction is simply the name of a function. The compiler is not going to run it and substitute its return value just because you name it in the macro.
According to the docs, you need to provide a constant byte value, like you do with the other two.
I think you are really confused with the concepts of run-time calls, compile-time computation and macros.

Incorrect hex addition on memory mapped address and offsets in C++

Let's start with what I am able to get working and then what is not working, and then hopefully the community can help me figure out why.
Let's say I have the following class:
#define BASE 0x20200000 /* base GPIO address in memory mapped IO*/
#define GPSET0 0x1c /* Pin Output Set */
#define GPCLR0 0x28 /* Pin Output Clear */
class LedDriver
{
    int reg;
    void set_bit(int value)
    {
        if (value) {
            reg = GPSET0;
        } else {
            reg = GPCLR0;
        }
        // set bit 15 based on the register being used
        volatile int* address = (volatile int*)(BASE + reg);
        *address |= (1 << 15);
    }
};
This works fine, and the end result is the LED blinking on and off. Great!
Now I want to use class members for the same, such as:
class LedDriver
{
    // some class members
    volatile const int BASE = 0x20200000; /* base GPIO address in memory mapped IO */
    volatile const int GPSET0 = 0x1c; /* Pin Output Set */
    volatile const int GPCLR0 = 0x28; /* Pin Output Clear */
    int reg;
    void set_bit(int value)
    {
        if (value) {
            reg = GPSET0;
        } else {
            reg = GPCLR0;
        }
        // set bit 15 based on the register being used
        volatile int* address = (volatile int*)(BASE + reg);
        *address |= (1 << 15);
    }
};
But this does not work. I am not experienced enough in C++ to know why, but I do think it has to do with the hex addition or somehow the variable is being treated differently than the macro.
Is my assumption correct? If so, is there a way to make it work, or is it not intended to work in this way?
One final note: I am deliberately avoiding any stdlib functions right now with the compiler flag -nostdlib for the purposes of learning.

C++ Macro Arguments w/ Token Concatenation?

I have a bunch of labeled servos, each one has its own calibrated min, mid and max pulse-width value.
// repository of calibrated servo pulse width values:
#define SERVO_0x01_MIN 165
#define SERVO_0x01_MID 347
#define SERVO_0x01_MAX 550
#define SERVO_0x02_MIN 165
#define SERVO_0x02_MID 347
#define SERVO_0x02_MAX 550
...
To simplify maintenance of the code, swapping a servo should only require changing a single macro definition value.
// maps certain positions on robot to the servo that is installed there
#define JOINT_0 0x02
#define JOINT_1 0x05
#define JOINT_2 0x0A
...
// function-like macros to resolve values from mapping
#define GET_MIN(servo) SERVO_##servo##_MIN
#define GET_MID(servo) SERVO_##servo##_MID
#define GET_MAX(servo) SERVO_##servo##_MAX
The problem I'm having is that calling a function-like macro with an argument that itself is a macro does not resolve to its terminal value:
// main
int main(void) {
// this works
int max_0x01 = GET_MAX(0x01); // int max_0x01 = 550;
// this doesn't
int max_joint_0 = GET_MAX(JOINT_0); // int max_joint_0 = SERVO_JOINT_0_MAX;
}
What can I do to make GET_MAX(JOINT_0) turn into 550 ?
#define GET_MAX(servo) GET_MAX2(servo)
#define GET_MAX2(servo) SERVO_##servo##_MAX
The preprocessor fully expands a macro's arguments before substituting them into the replacement list, except where an argument is used with # or ## (which is why the single-level version fails). With the extra level of indirection, JOINT_0 is expanded first, so GET_MAX(JOINT_0) will expand to
GET_MAX2(0x02)
This gets further expanded to
SERVO_0x02_MAX
And finally replaced with the #define value 550

Compilation error: expected constructor, destructor, or type conversion before ‘;’ token [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
I am new here, posting because I have read several posts that have helped me. I know you may regard this post as another duplicate, but it's not: my code is different from the others. Here is my code:
#include "bcm2835.h"
#include <cmath>
#include <iostream>
using namespace std;
// COMMANDS
#define WAKEUP 0x02
#define SLEEP 0x04
#define RESET 0x06
#define START 0x09
#define STOP 0x0a
#define RDATAC 0x10
#define SDATAC 0x11
#define RDATA 0x12
#define OFSCAL 0x18
#define GANCAL 0x19
#define RREG1 0x20
#define WREG1 0x40
#define RREG2 0x08
#define WREG2 0x08
// REGISTERS
#define CONFIG0 0x85
#define CONFIG1 0x10 // checksum is kept off
#define CONFIG2 0x15 //10SPS data rate and Gate control mode
#define OFC0 0x00
#define OFC1 0x00
#define OFC2 0x00
#define FSC0 0x00
#define FSC1 0x00
#define FSC2 0x40
#define NUM 1024
int nobytes;
int S = 50;
int i,flag = 1;
int j, k, factor, converged, count = 0;
char status = LOW;
char txbuffer[11], rxbuffer[4], dummy;
float xhat, xhat_m, P_m, L , K, last_xhat_converged, xhat_converged = 0.0;
float P = 1.0;
float R = 0.01;
float Q, mean, variance = 0;
float current, lastreading, current_test[50];
double key, startkey;
float X1[4096];
float X2[4096];
float Xf1[4096];
float Xf2[4096];
float v[4096];
float xf[4096];
float c[65536];
float ys[65536];
spi_start();
initialise();
void spi_start()
{
bcm2835_init();
//cout << "The SPI mode is starting";
// INITIAL SETUP OF THE SPI DEVICE
bcm2835_spi_begin(); // Setup the SPI0 port on the RaspberryPi
bcm2835_spi_chipSelect(BCM2835_SPI_CS0); // Assert the chip select
bcm2835_spi_setBitOrder(BCM2835_SPI_BIT_ORDER_MSBFIRST); // Set the Bit order
bcm2835_spi_setChipSelectPolarity(BCM2835_SPI_CS0, LOW); // Set the the chip select to be active low
bcm2835_spi_setClockDivider(BCM2835_SPI_CLOCK_DIVIDER_64); // Set the clock divider, SPI speed is 3.90625MHz
bcm2835_spi_setDataMode(BCM2835_SPI_MODE1); // Set the Data mode
//cout << "The SPI mode has been started";
}
void initialise()
{
// INITIAL RESET OF THE CHIP
nobytes = 1;
txbuffer[0] = RESET;
bcm2835_spi_writenb(txbuffer, nobytes);
bcm2835_delay(100); //no accurate timing required
// WRITING OF THE CONTROL AND THE CALIBRATION REGISTERS
nobytes = 11;
txbuffer[0] = WREG1;
txbuffer[1] = WREG2;
txbuffer[2] = CONFIG0;
txbuffer[3] = CONFIG1;
txbuffer[4] = CONFIG2;
txbuffer[5] = OFC0;
txbuffer[6] = OFC1;
txbuffer[7] = OFC2;
txbuffer[8] = FSC0;
txbuffer[9] = FSC1;
txbuffer[10]= FSC2;
bcm2835_spi_writenb(txbuffer, nobytes);
bcm2835_delay(100); //no accurate timing required
}
suggestions will be appreciated.
Your issue is in these two lines at file scope:
spi_start();
initialise();
These are function calls, not function declarations, and statements are not allowed outside a function body. If you meant to forward-declare the functions, include the return types:
void spi_start();
void initialise();

Flushing denormalised numbers to zero

I've scoured the web to no avail.
Is there a way for Xcode and Visual C++ to treat denormalised numbers as 0? I would have thought there's an option in the IDE preferences to turn on this option but can't seem to find it.
I'm doing some cross-platform audio stuff and need to stop certain processors hogging resources.
Cheers
You're looking for a platform-defined way to set FTZ and/or DAZ in the MXCSR register (on x86 with SSE or x86-64); see https://stackoverflow.com/a/2487733/567292
Usually this is called something like _controlfp; Microsoft documentation is at http://msdn.microsoft.com/en-us/library/e9b52ceh.aspx
You can also use the _MM_SET_FLUSH_ZERO_MODE macro: http://msdn.microsoft.com/en-us/library/a8b5ts9s(v=vs.71).aspx - this is probably the most cross-platform portable method.
For disabling denormals globally I use these 2 macros:
//warning: these macros have to be used in the same scope
#define MXCSR_SET_DAZ_AND_FTZ \
int oldMXCSR__ = _mm_getcsr(); /*read the old MXCSR setting */ \
int newMXCSR__ = oldMXCSR__ | 0x8040; /* set DAZ and FZ bits */ \
_mm_setcsr( newMXCSR__ ); /*write the new MXCSR setting to the MXCSR */
#define MXCSR_RESET_DAZ_AND_FTZ \
/*restore old MXCSR settings to turn denormals back on if they were on*/ \
_mm_setcsr( oldMXCSR__ );
I call the first one at the beginning of the process and the second at the end.
Unfortunately, this does not seem to work well on Windows.
To flush denormals locally I use this
const Float32 k_DENORMAL_DC = 1e-25f;
inline void FlushDenormalToZero(Float32& ioFloat)
{
ioFloat += k_DENORMAL_DC;
ioFloat -= k_DENORMAL_DC;
}
See the update (4 Aug 2022) at the end of this entry.
To do this, use the Intel Intrinsics macros during program startup. For example:
#include <immintrin.h>
int main() {
_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
}
In my version of MSVC, this emitted the following assembly code:
stmxcsr DWORD PTR tv805[rsp]
mov eax, DWORD PTR tv805[rsp]
bts eax, 15
mov DWORD PTR tv807[rsp], eax
ldmxcsr DWORD PTR tv807[rsp]
MXCSR is the control and status register, and this code is setting bit 15, which turns flush zero mode on.
One thing to note: this only affects denormals resulting from a computation. If you want to also set denormals to zero if they're used as input, you also need to set the DAZ flag (denormals are zero), using the following command:
_MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
See https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-setting-the-ftz-and-daz-flags for more information.
Also note that you need to set MXCSR for each thread, as the values contained are local to each thread.
Update 4 Aug 2022
I've now had to deal with ARM processors as well. The following is a cross-platform macro that works on ARM and Intel:
#ifndef __ARM_ARCH
extern "C" {
extern unsigned int _mm_getcsr();
extern void _mm_setcsr(unsigned int);
}
#define MY_FAST_FLOATS _mm_setcsr(_mm_getcsr() | 0x8040U)
#else
#define MY_FPU_GETCW(fpcr) __asm__ __volatile__("mrs %0, fpcr" : "=r"(fpcr))
#define MY_FPU_SETCW(fpcr) __asm__ __volatile__("msr fpcr, %0" : : "r"(fpcr))
#define MY_FAST_FLOATS \
{ \
uint64_t eE2Hsb4v {}; /* random name to avoid shadowing warnings */ \
MY_FPU_GETCW(eE2Hsb4v); \
eE2Hsb4v |= (1 << 24) | (1 << 19); /* FZ flag, FZ16 flag; flush denormals to zero */ \
MY_FPU_SETCW(eE2Hsb4v); \
} \
static_assert(true, "require semi-colon after macro with this assert")
#endif