VirtualBox crashing when assembly "sti" is executed while testing my operating system - c++

I'm currently attempting to build my own operating system and have run into an issue when trying to test my kernel code using VirtualBox.
The real issue arises when I call the assembly instruction sti, as I'm currently attempting to implement an interrupt descriptor table and communicate with the PICs.
Here is the code that calls it. It's a function called kernel_main that is called from another assembly file. That file simply sets up the stack before executing any code from the OS, but there haven't been any issues there, and everything works fine until I add the instruction asm("sti"); to the following code:
/* main function of our kernel
* accepts the pointer to multiboot and the magic code (no particular reason to take the magic number)
*
* use extern C to prevent gcc from changing the name
*/
extern "C" void kernel_main(void *multiboot_structure, uint32_t magic_number)
{
// can't use the standard printf as we're outside an OS currently
// we don't have access to glibc so we write our own printf
printf_boot_message("kernel.....\n");
// create the global descriptor table
GlobalDescriptorTable gdt;
// create the interrupt descriptor table
InterruptHandler interrupt_handler(&gdt);
// enable interrupts (test)
asm("sti"); // <- causes crash
// random debug printf
printf_boot_message("sti called\n");
// kernel never really stops, infinite loop
while (1)
;
}
Below is the VirtualBox debug output. I've googled around for VINF_EM_TRIPLE_FAULT but mostly found RAM-related issues that I don't think apply to me. The printf calls in the above code execute as expected, followed by the VM immediately crashing with the following:
Link to output as it's too large to post here: https://pastebin.com/jfPfhJUQ
Here is my interrupt handling code:
/*
* Implementations of the interrupt handling routines in sys_interrupts.h
*/
#include "sys_interrupts.h"
#include "misc.h"
//handle() is used to take the interrupt number,
//i_number, and the address to the current CPU stack frame.
uint32_t InterruptHandler::handle(uint8_t i_number, uint32_t crnt_stkptr)
{
// debug
printf(" INTERRUPT");
// after the interrupt code has been executed,
// return the stack pointer so that the CPU can resume
// where it left off.
// this works for now as we do not have multiple
// concurrent processes running, so there is no issue
// of handling the thread number.
return crnt_stkptr;
}
// define the interrupt descriptor table
InterruptHandler::_gate_descriptor InterruptHandler::interrupt_desc_table[N_ENTRIES];
// define the constructor. Takes a pointer to the global
// descriptor table
InterruptHandler::InterruptHandler(GlobalDescriptorTable* global_desc_table)
{
// grab the offset of the usable memory within our global segment
uint16_t seg = global_desc_table->CodeSegmentSelector();
// set all the entries in the IDT to block request initially
for (uint16_t i = 0; i < N_ENTRIES; i++)
{
// create a gate for a system-level interrupt, calling the block function (does nothing) and using seg as its code segment selector.
create_entry(i, seg, &block_request, PRIV_LVL_KERNEL, GATE_INTERRUPT);
}
// create a couple interrupts for 0x00 and 0x01, really 0x20 and 0x21 in memory
//create_entry(BASE_I_NUM + 0x00, seg, &isr0x00, PRIV_LVL_KERNEL, GATE_INTERRUPT);
//create_entry(BASE_I_NUM + 0x01, seg, &isr0x01, PRIV_LVL_KERNEL, GATE_INTERRUPT);
// init the PICs
pic_controller.send_master_cmd(PIC_INIT);
pic_controller.send_slave_cmd(PIC_INIT);
// tell master pic to add 0x20 to any interrupt number it sends to CPU, while slave pic sends 0x28 + i_number
pic_controller.send_master_data(PIC_OFFSET_MASTER);
pic_controller.send_slave_data(PIC_OFFSET_SLAVE);
// set the interrupt vectoring to cascade and tell master that there is a slave PIC at IRQ2
pic_controller.send_master_data(ICW1_INTERVAL4);
pic_controller.send_slave_data(ICW1_SINGLE);
// set the PICs to work in 8086 mode
pic_controller.send_master_data(ICW1_8086);
pic_controller.send_slave_data(ICW1_8086);
// send 0s
pic_controller.send_master_data(DEFAULT_MASK);
pic_controller.send_slave_data(DEFAULT_MASK);
// tell the cpu to use the table
interrupt_desc_table_pointerdata idt_ptr;
//set the size
idt_ptr.table_size = N_ENTRIES * sizeof(_gate_descriptor) - 1;
// set the base address
idt_ptr.base_addr = (uint32_t)interrupt_desc_table;
// use lidt instruction to load the table
// the cpu will map interrupts to the table
asm volatile("lidt %0" : : "m" (idt_ptr));
// issue debug print
printf_boot_message(" 2: Created Interrupt Desc Table...\n");
}
// define the destructor of the class
InterruptHandler::~InterruptHandler()
{
}
// function to make entries in the IDT
// takes the interrupt number as an index, the segment offset used to specify which memory segment to use,
// a pointer to the function to call, the flags and access level.
void InterruptHandler::create_entry(uint8_t i_number, uint16_t segment_desc_offset, void (*isr)(), uint8_t priv_lvl, uint8_t desc_type)
{
// set the i_number'th entry to the given params
// take the lower bits of the pointer
interrupt_desc_table[i_number].handler_lower_bits = ((uint32_t)isr) & 0xFFFF;
// take the upper bits
interrupt_desc_table[i_number].handler_upper_bits = (((uint32_t)isr) >> 16) & 0xFFFF;
// calculate the privilege byte, setting the correct bits
interrupt_desc_table[i_number].priv_lvl = 0x80 | ((priv_lvl & 3) << 5) | desc_type;
interrupt_desc_table[i_number].segment_desc_offset = segment_desc_offset;
// reserved byte is always 0
interrupt_desc_table[i_number].reserved_byte = 0;
}
// need a function to block or ignore any requests
// that we don't want to service. Requests could be caused
// by devices we haven't yet configured when testing the os.
void InterruptHandler::block_request()
{
// do nothing
}
// function to tell the CPU to send interrupts
// to this table
void InterruptHandler::set_active()
{
// call the sti instruction so the CPU starts accepting interrupts
asm volatile("sti"); // <- calling this crashes the kernel
// issue debug print
printf_boot_message(" 4: Activated sys interrupts...\n");
}
And here is the code for my GDT; I followed the OSDev wiki guide for this:
#include "global_desc_table.h"
/**
* A code segment is identified by flag 0x9A (you cannot write to a code segment),
* while a data segment is identified by flag 0x92
*
* Based on the C code present on OSDEV Wiki
*/
GlobalDescriptorTable::GlobalDescriptorTable() : nullSegmentSelector(0, 0, 0),
unusedSegmentSelector(0, 0, 0),
codeSegmentSelector(0, 64*1024*1024, 0x9A),
dataSegmentSelector(0, 64*1024*1024, 0x92)
{
//8 bytes defined, but processor expects 6 bytes only
uint32_t i[2];
//first 4 bytes is address of table
i[0] = (uint32_t)this;
//second 4 bytes, the high bytes, are size of global desc table
i[1] = sizeof(GlobalDescriptorTable) << 16;
// tell the processor to use this table via the lgdt instruction
asm volatile("lgdt (%0)" : : "p" (((uint8_t *) i) + 2));
// issue debug print
printf_boot_message(" 1: Created Global Desc Table...\n");
}
// function to get the offset of the datasegment selector
uint16_t GlobalDescriptorTable::DataSegmentSelector()
{
// calculate the offset by subtracting the table's address from the datasegment's address
return (uint8_t *) &dataSegmentSelector - (uint8_t*)this;
}
// function to get the offset of the code segment
uint16_t GlobalDescriptorTable::CodeSegmentSelector()
{
// calculate the offset by subtracting the table's address from the code segment's address
return (uint8_t *) &codeSegmentSelector - (uint8_t*)this;
}
// default destructor
GlobalDescriptorTable::~GlobalDescriptorTable()
{
}
/**
* The constructor to create a new entry segment, set the flags, determine the formatting for the limit, and set the base
*/
GlobalDescriptorTable::SegmentDescriptor::SegmentDescriptor(uint32_t base, uint32_t limit, uint8_t flags)
{
uint8_t* target = (uint8_t*)this;
//if 16 bit limit
if (limit <= 65536)
{
// tell processor that this is a 16bit entry
target[6] = 0x40;
} else {
// if the last 12 bits of limit are not 1s
if ((limit & 0xFFF) != 0xFFF)
{
limit = (limit >> 12) - 1;
} else {
limit >>= 12;
}
// indicate that there was a shift of 12 done
target[6] = 0xC0;
}
// set the two lowest bytes of the limit
target[0] = limit & 0xFF;
target[1] = (limit >> 8) & 0xFF;
//the rest of the limit must go in the lower 4 bits of byte 6
target[6] |= (limit >> 16) & 0xF;
//encode the pointer
target[2] = base & 0xFF;
target[3] = (base >> 8) & 0xFF;
target[4] = (base >> 16) & 0xFF;
target[7] = (base >> 24) & 0xFF;
// set the flags
target[5] = flags;
}
/**
* Define the methods to get the base pointer and the limit
* for a segment, taken from the OSDev wiki
*/
uint32_t GlobalDescriptorTable::SegmentDescriptor::Base()
{
// simply do the reverse of what was done to place the pointer in
uint8_t* target = (uint8_t*) this;
uint32_t result = target[7];
result = (result << 8) + target[4];
result = (result << 8) + target[3];
result = (result << 8) + target[2];
return result;
}
uint32_t GlobalDescriptorTable::SegmentDescriptor::Limit()
{
uint8_t* target = (uint8_t *)this;
uint32_t result = target[6] & 0xF;
result = (result << 8) + target[1];
result = (result << 8) + target[0];
//check if there was a shift of 12
if (target[6] & 0xC0 == 0xC0)
{
result = (result << 12) & 0xFFF;
}
return result;
}

i[0] = (uint32_t)this;
//second 4 bytes, the high bytes, are size of global desc table
i[1] = sizeof(GlobalDescriptorTable) << 16;
I've had the same problem; just swap the 0 and 1 indices:
i[1] = (uint32_t)this;                       // the high 4 bytes hold the table's address
i[0] = sizeof(GlobalDescriptorTable) << 16;  // the size goes in the upper 16 bits of the low 4 bytes
That's the problem if you are following the same tutorial, which I suspect you are if you ended up here.
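To make the layout explicit: lgdt expects a 6-byte structure, a 16-bit limit (size of the table minus 1) followed by a 32-bit base address. Here is a minimal sketch of the same fix using a packed struct instead of the uint32_t[2] trick; this is illustrative, not the original code, and GdtPointer is a made-up name:
// hypothetical packed GDTR; no padding, so lgdt sees exactly 6 bytes
struct GdtPointer {
    uint16_t limit;   // size of the table in bytes, minus 1
    uint32_t base;    // linear address of the first descriptor
} __attribute__((packed));
// inside the GlobalDescriptorTable constructor:
GdtPointer gdtr;
gdtr.limit = sizeof(GlobalDescriptorTable) - 1;
gdtr.base  = (uint32_t)this;
asm volatile("lgdt %0" : : "m"(gdtr));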

Sometimes this is also caused by a wrong IDTR value (an invalid pointer causing the crash).
Check the IDTR register value in the VirtualBox log.
If you load the IDT in protected mode, the address of the IDT can show up with some weird changes (shifted left 16 bits, or with some value in the lower 16 bits).
Try adjusting the pointer accordingly (that's how I did it), or use lidt before entering protected mode (this is also tested).
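The "shifted left 16 bits" symptom is usually a sign that the 6-byte pointer structure handed to lidt has compiler padding between its 16-bit size field and the 32-bit base, so lidt picks up the base from the wrong offset. The question doesn't show the definition of interrupt_desc_table_pointerdata, so this is only an assumption, but a padding-free layout would look like:
// hypothetical packed layout for the structure passed to lidt; without
// __attribute__((packed)) there are 2 bytes of padding after table_size
// and the loaded base appears shifted
struct interrupt_desc_table_pointerdata {
    uint16_t table_size;  // size of the IDT in bytes, minus 1
    uint32_t base_addr;   // linear address of the first gate descriptor
} __attribute__((packed));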

There was a bug in my GDT that forced the kernel to read an invalid pointer from the segment. This caused a seg fault.

Related

8b10b encoder with byte stream output (bits carry): faster bitwise algorithm?

I have written an 8b10b encoder that generates a stream of bytes intended to be sent to a serial transmitter which sends the bytes as-is, LSb first.
What I'm doing here is basically laying down groups of 10 bits (encoded from the input stream of bytes) on groups of 8, so a varying number of bits gets carried over from one output byte to the next - kind of like in music/rhythm.
The program has been successfully tested, but it is about 4-5x too slow for my application. I think that comes from the fact that every bit has to be looked up in an array. My gut tells me it could be made faster with some sort of rolling mask, but I can't yet see how to do that, even by swapping the 3D array of booleans for a 2D array of integers.
Any pointers or other ideas?
Here is the code. Please ignore most of the macros and some of the code related to deciding which byte is to be written as this is application-specific.
Header:
#ifndef TX_BYTESTREAM_GEN_H_INCLUDED
#define TX_BYTESTREAM_GEN_H_INCLUDED
#include <stdint.h> //for standard portable types such as uint16_t
#define MAX_USB_TRANSFER_SIZE 1016 //Bytes, size of the max payload in a USB transaction. Determined using FT4222_GetMaxTRansferSize()
#define MAX_USB_PACKET_SIZE 62 //Bytes, max size of the payload of a single USB packet
#define MANDATORY_TX_PACKET_BLOCK 5 //Bytes, constant - equal to the minimum number of bytes of TX packet necessary to exactly transfer blocks of 10 bits of encoded data (LCF of 8 and 10)
#define SYNC_CHARS_MAX_INTERVAL 172 //Target number of payload bytes between sync chars. Max is 188 before desynchronisation
#define ROUND_UP(N, S) ((((N) + (S) - 1) / (S)) * (S)) //Macro to round up the integer N to the largest multiple of the integer S
#define ROUND_DOWN(N,S) ((N / S) * S) //Same rounding down
#define N_SYNC_CHAR_PAIRS_IN_PCKT(pcktSz) (ROUND_UP((pcktSz*1000/(SYNC_CHARS_MAX_INTERVAL+2)),1000)/1000) //Number of sync (K28.5) character/byte pairs in a given packet
#define TX_PAYLOAD_SIZE(pcktSz) ((pcktSz*4/5)-2*N_SYNC_CHAR_PAIRS_IN_PCKT(pcktSz)) //Size in bytes of the payload data before encoding in a single TX packet
#define MAX_TX_PACKET_SIZE (ROUND_DOWN((MAX_USB_TRANSFER_SIZE-MAX_USB_PACKET_SIZE),(MAX_USB_PACKET_SIZE*MANDATORY_TX_PACKET_BLOCK))) //Maximum size in bytes of a TX packet
#define DEFAULT_TX_PACKET_SIZE (MAX_TX_PACKET_SIZE-MAX_USB_PACKET_SIZE*MANDATORY_TX_PACKET_BLOCK) //Default size in bytes of a TX packet with some margin
#define MAX_TX_PAYLOAD_SIZE (TX_PAYLOAD_SIZE(MAX_TX_PACKET_SIZE)) //Maximum size in bytes of the payload in a TX packet
#define DEFAULT_TX_PAYLOAD_SIZE (TX_PAYLOAD_SIZE(DEFAULT_TX_PACKET_SIZE))//Default size in bytes of the payload in a TX packet with some margin
//See string descriptors below for definitions. Error codes are individual bits so can be combined.
enum ErrCode
{
NO_ERR = 0,
INVALID_DIN_SIZE = 1,
INVALID_DOUT_SIZE = 2,
NULL_DIN_PTR = 4,
NULL_DOUT_PTR = 8
};
char const * const ERR_CODE_DESC[] = {
"No error",
"Invalid size of input data",
"Invalid size of output buffer",
"Input data pointer is NULL",
"Output buffer pointer is NULL"
};
/** #brief Generates the bytestream to the transmitter by encoding the incoming data using 8b10b encoding
and inserting K28.5 synchronisation characters to maintain the synchronisation with the demodulator (LVDS passthrough mode)
#arg din is a pointer to an allocated array of bytes which contains the data to encode
#arg dinSize is the size of din in bytes. This size must be equal to TX_PAYLOAD_SIZE(doutSize)
#arg dout is a pointer to an allocated array of bytes which is intended to contain the output bytestream to the transmitter
#arg doutSize is the size of dout in bytes. This size must meet the conditions at the top of this function's implementation. Use DEFAULT_TX_PACKET_SIZE if in doubt.
#return error code (c.f. ErrCode) **/
int TX_gen_bytestream(uint8_t *din, uint16_t dinSize, uint8_t *dout, uint16_t doutSize);
#endif // TX_BYTESTREAM_GEN_H_INCLUDED
Source file:
#include "TX_bytestream_gen.h"
#include <cstddef> //NULL
#define N_BYTE_VALUES (256+1) //256 possible data values + 1 special character (only accessible to this module)
#define N_ENCODED_BITS 10 //Number of bits corresponding to the 8b10b encoding of a byte
//Map the current running disparity, the desired value to encode to the array of encoded bits for 8b10b encoding.
//The Last value is the K28.5 sync character, only accessible to this module
//Notation = MSb to LSb
bool const encodedBits[2][N_BYTE_VALUES][N_ENCODED_BITS] =
{
//Long table (see appendix)
};
//New value of the running disparity after encoding with the specified previous running disparity and requested byte value (c.f. above)
bool const encodingDisparity[2][N_BYTE_VALUES] =
{
//Long table (see appendix)
};
int TX_gen_bytestream(uint8_t *din, uint16_t dinSize, uint8_t *dout, uint16_t doutSize)
{
static bool RDp = false; //Running disparity is initially negative
int ret = 0;
//If the output buffer size is not a multiple of the mandatory payload block or of the USB packet size, or if it cannot be held in a single USB transaction
//return an invalid output buffer size error
if(doutSize == 0 || (doutSize % MANDATORY_TX_PACKET_BLOCK) || (doutSize % MAX_USB_PACKET_SIZE) || (doutSize > MAX_TX_PACKET_SIZE)) //Temp
ret |= INVALID_DOUT_SIZE;
//If the input data size is not consistent with the output buffer size, return the appropriate error code
if(dinSize == 0 || dinSize != TX_PAYLOAD_SIZE(doutSize))
ret |= INVALID_DIN_SIZE;
if(din == NULL)
ret |= NULL_DIN_PTR;
if(dout == NULL)
ret |= NULL_DOUT_PTR;
//If everything checks out, carry on
if(ret == NO_ERR)
{
uint16_t iByteIn = 0; //Index of the byte of input data currently being processed
uint16_t iByteOut = 0; //Index of the output byte currently being written to
uint8_t iBitOut = 0; //Starts with LSb
int16_t nBytesUntilSync = 0; //Countdown of bytes until a sync marker needs to be sent. Cyclic.
//For all output bytes to generate
while(iByteOut < doutSize)
{
bool sync = false; //Initially this byte is not considered a sync byte (in which case the next byte of data will be processed)
//If the maximum interval between sync characters has been reached, mark the two next bytes as sync bytes and reset the counter
if(nBytesUntilSync <= 0)
{
sync = true;
if(nBytesUntilSync == -1) //After the second SYNC is written, the counter is reset
{
nBytesUntilSync = SYNC_CHARS_MAX_INTERVAL;
}
}
//Append bit by bit the encoded data of the byte to write to the output bitstream (carried over from byte to byte) - LSb first
//The byte to write is either the last byte of the encodedBits map (the sync character K28.5) if sync is set, or the next byte of
//input data if it isn't
uint16_t const byteToWrite = (sync?(N_BYTE_VALUES-1):din[iByteIn]);
for(int8_t iEncodedBit = N_ENCODED_BITS-1 ; iEncodedBit >= 0 ; --iEncodedBit, iBitOut++)
{
//If the current output byte is complete, reset the bit index and select the next one
if(iBitOut >= 8)
{
iByteOut++;
iBitOut = 0;
}
//Effectively sets the iBitOut'th bit of the iByteOut'th byte out to the encoded value of the byte to write
bool bitToWrite = encodedBits[RDp][byteToWrite][iEncodedBit]; //Temp
dout[iByteOut] ^= (-bitToWrite ^ dout[iByteOut]) & (1 << iBitOut);
}
//The running disparity is also updated as per the standard (to achieve DC balance)
RDp = encodingDisparity[RDp][byteToWrite]; //Update the running disparity
//If sync was not set, this means a byte of the input data has been processed, in which case take the next one in
//Also decrement the synchronisation counter
if(!sync) {
iByteIn++;
}
//In any case, decrease the synchronisation counter. Even sync characters decrease it (c.f. top of while loop)
nBytesUntilSync--;
}
}
return ret;
}
Testbench:
#include <iostream>
#include "TX_bytestream_gen.h"
#define PACKET_DURATION 0.000992 //In seconds, time of continuous data stream corresponding to one packet (5MHz output, default packet size)
#define TIME_TO_SIMULATE 10 //In seconds
#define PACKET_SIZE DEFAULT_TX_PACKET_SIZE
#define PAYLOAD_SIZE DEFAULT_TX_PAYLOAD_SIZE
#define N_ITERATIONS (TIME_TO_SIMULATE/PACKET_DURATION)
#include <chrono>
using namespace std;
//Testbench: measure the time taken to simulate TIME_TO_SIMULATE seconds of continuous encoding
int main()
{
uint8_t toEncode[PAYLOAD_SIZE] = {100}; //Dummy data, doesn't matter
uint8_t out[PACKET_SIZE] = {0};
std::chrono::time_point<std::chrono::system_clock> start, end;
start = std::chrono::system_clock::now();
for(unsigned int i = 0 ; i < N_ITERATIONS ; i++)
{
TX_gen_bytestream(toEncode, PAYLOAD_SIZE, out, PACKET_SIZE);
}
end = std::chrono::system_clock::now();
std::chrono::duration<double> elapsed_seconds = end - start;
std::cout << "Task execution time: " << elapsed_seconds.count()/TIME_TO_SIMULATE*100 << "% (for " << TIME_TO_SIMULATE << "s simulated)\n";
return 0;
}
Appendix: lookup tables. I don't have enough characters to paste them here in full, but they look like this:
bool const encodedBits[2][N_BYTE_VALUES][N_ENCODED_BITS] =
{
//Running disparity = RD-
{
{1,0,0,1,1,1,0,1,0,0},
//...
},
//Running disparity = RD+
{
{0,1,1,0,0,0,1,0,1,1},
//...
}
};
bool const encodingDisparity[2][N_BYTE_VALUES] =
{
//Previous running disparity was RD-
{
0,
//...
},
//Previous running disparity was RD+
{
1,
//...
}
};
This will be a lot faster if you do everything a byte at a time instead of a bit at a time.
First change the way you store your lookup tables. You should have something like:
// conversion from (RD, byte) to (RD, 10-bit code)
// in each word, the lower 10 bits are the code,
// and bit 10 (the 11th bit) is the new RD
// The first 256 values are for RD -1, the next
// for RD 1
static const uint16_t BYTE_TO_CODE[512] = {
...
}
Then you need to change your encoding loop to write a byte at a time. You can use a uint16_t to store the leftover bits from each byte you output.
Something like this (I didn't figure out your sync byte logic, but presumably you can put that in the input or output byte loop):
// returns next isRD1
bool TX_gen_bytestream(uint8_t *dest, const uint8_t *src, size_t src_len, bool isRD1)
{
// bits generated, but not yet written, LSB first
uint16_t bits = 0;
// number of bits in bits
unsigned numbits = 0;
// current RD, either 0 or 256
uint16_t rd = isRD1 ? 256 : 0;
for (const uint8_t *end = src + src_len; src < end; ++src) {
// lookup code and next rd
uint16_t code = BYTE_TO_CODE[rd + *src];
// new rd from code bit 10
rd = (code>>2) & 256;
// store bits
bits |= (code & (uint16_t)0x03FF) << numbits;
numbits+=10;
// write out any complete bytes
while(numbits >= 8) {
*dest++ = (uint8_t)bits;
bits >>=8;
numbits-=8;
}
}
// If src_len isn't divisible by 4, then we have some extra bits
if (numbits) {
*dest = (uint8_t)bits;
}
return !!rd;
}
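If writing out 512 literals by hand is a concern, the packed table could also be generated once at startup from the existing bool tables. A sketch, assuming it lives in the same file as encodedBits and encodingDisparity; build_byte_to_code is a made-up name, and the K28.5 sync entry (index N_BYTE_VALUES-1) is deliberately left out to match the 2 x 256 layout above, so it would still need separate handling:
static uint16_t BYTE_TO_CODE[512];
static void build_byte_to_code(void)
{
    for (int rd = 0; rd < 2; ++rd) {
        for (int v = 0; v < 256; ++v) {
            uint16_t code = 0;
            // encodedBits is stored MSb-to-LSb while the encoder emits LSb
            // first, so reverse the bit order when packing the 10-bit code
            for (int b = 0; b < N_ENCODED_BITS; ++b) {
                if (encodedBits[rd][v][b])
                    code |= (uint16_t)1 << (N_ENCODED_BITS - 1 - b);
            }
            // bit 10 carries the running disparity after this code
            if (encodingDisparity[rd][v])
                code |= (uint16_t)1 << 10;
            BYTE_TO_CODE[rd * 256 + v] = code;
        }
    }
}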

How to improve time efficiency of CRC-5 calculation?

I have just started to study CRC and how to implement it in software. My information source is mainly the following document, which describes a simple algorithm for calculating the CRC for any generator polynomial. I have attempted to write this algorithm in C++. I have tested it for the generator polynomial x^5 + x^4 + x^2 + 1 (CRC-5) (the generator polynomial used by the chip) against this online calculator and it works.
#include <iostream>
using namespace std;
int main() {
uint8_t data_byte = 0x31;
// polynom x^5 + x^4 + x^2 + 1
uint16_t polynom = 0x35;
// register contains 0 at the beginning
uint32_t crc = 0;
uint32_t message = 0;
// shift the message byte to left by so many bits which is needed for generator polynomial
message = (data_byte << 5);
// now the message byte is 13 bits long
uint8_t processed_bit = 13;
while(processed_bit > 0) {
// prepare free space for new bit from the message byte
crc = crc << 1;
// find out value of current msb in the message byte
message = message << 1;
if(message & 0x2000) {
// msb in message byte is "1"
// lsb in register is set to "1"
crc |= 1U;
} else {
// msb in message byte is "0"
// lsb in register is set to "0"
crc &= ~1U;
}
// remove msb from message byte
message = message & ~0x2000;
if(crc & 0x20) {
// subtract current multiple of the generator polynomial
crc = crc ^ polynom;
}
// remove msb from the register
crc = crc & ~0x20;
processed_bit--;
}
cout << "CRC: " << (int)crc << endl;
return 0;
}
It is obvious that this program is inefficient in terms of execution time, so I have been thinking about how to improve it in that respect. I know there is a variant that uses a look-up table containing the precalculated remainders, but I would like to avoid that method. Does anybody know how to improve the above-mentioned algorithm from the execution-time perspective? Thanks in advance for any suggestions.
Just a quick glance shows several unnecessary statements. You don't need crc &= ~1U;, since the crc = crc << 1; already put a zero there. You don't need message = message & ~0x2000;, since you are only ever looking at one bit in there. Just let the other bits shift up and away. You don't need the crc = crc & ~0x20;, since the exclusive-or with the polynomial already did that.
If you read the document you linked, you will find that you do not need to process five more bits (13 total). You only need to process the eight message bits. Also reading that document, you do not need to feed in the message bits one at a time. You can exclusive-or the message byte directly onto the CRC register, and then process the eight bits all in the register.
Finally, you can speed up the calculation significantly with a table look up, processing eight bits at a time instead of one bit at a time. This is also described beautifully in the document you linked. You can find code here to automatically generate the table and C code for the calculation.
In the end though, none of this matters if you're not calculating the right thing to begin with. You need to verify the calculation with data from the chip first. I found this document with details on the CRC calculation for that chip. You need to spend some time with it and understand it in detail.
To answer your question directly, here is some code that does what your code does, but is much simpler. Also it is extended to work on n bits, not just eight. It does n loops instead of n+5 loops:
// Return a CRC-5 of the low n bits of data. The remaining bits of data must be
// zero. n must be in [5..32].
uint8_t crc5(uint32_t data, int n) {
int shift = n - 5;
uint32_t poly = (uint32_t)0x15 << shift;
uint32_t top = (uint32_t)1 << (n - 1);
do {
data = data & top ? (data << 1) ^ poly : data << 1;
} while (--n);
return (data >> shift) & 0x1f;
}
Simpler and faster still is the equivalent of yours restricted to eight bits, unrolled:
uint8_t crc5_8(uint8_t data) {
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
return data >> 3;
}
However neither can calculate what you need for your chip.
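For completeness, the byte-at-a-time table lookup mentioned above can be built directly from the unrolled version: run the same eight steps on every possible byte value once, then process the message with one table lookup per byte. A sketch for this particular polynomial (zero initial CRC, not reflected; crc5_make_table and crc5_buf are made-up names), with the same caveat that it may not match what the chip actually computes:
static uint8_t crc5_table[256];
// build the 256-entry table by running the eight unrolled steps of crc5_8()
// on every byte value, keeping the CRC left-justified in the top 5 bits
static void crc5_make_table(void) {
    for (int i = 0; i < 256; i++) {
        uint8_t crc = (uint8_t)i;
        for (int k = 0; k < 8; k++)
            crc = crc & 0x80 ? (uint8_t)((crc << 1) ^ 0xa8) : (uint8_t)(crc << 1);
        crc5_table[i] = crc;
    }
}
// CRC-5 of a whole buffer, one lookup per byte
static uint8_t crc5_buf(const uint8_t *data, size_t n) {
    uint8_t crc = 0;                 // left-justified CRC register
    while (n--)
        crc = crc5_table[crc ^ *data++];
    return crc >> 3;                 // right-justify the 5-bit result
}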

stm32f746 flash read after write returns null

I am saving settings to the flash memory and reading them back again. 2 of the values always come back empty. However, the data IS written to flash, since after a reset the values read are the new saved values and not empty.
I started experiencing this problem after I did some code-refactoring after taking the code over from another company.
Saving and reading the settings back works when you actually do the following (old inefficient way):
save setting 0 - read setting 0
save setting 1 - read setting 1
...
save setting 13 - read setting 13
This is EXTREMELY inefficient and slow, since the whole page with all the settings is read from flash, the whole block of flash is erased, the new setting is put into the read buffer, and then the whole block (with only 1 changed setting) is written back to flash. And this happens for all 14 settings!! But it works ...
unsigned char Save_One_Setting(unsigned char Setting_Number, unsigned char* value, unsigned char length)
{
/* Program the user Flash area word by word
(area defined by FLASH_USER_START_ADDR and FLASH_USER_END_ADDR) ***********/
unsigned int a;
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
buf[a++] = *(__IO uint32_t *)Address;
Address = Address + 4;
}
memset(&buf[Setting_Number * 60], 0, 60); // Clear setting value
memcpy(&buf[Setting_Number * 60], &value[0], length); // Set setting value
Erase_User_Flash_Memory();
HAL_FLASH_Unlock();
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
if (HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, Address, buf[a++]) == HAL_OK)
{
Address = Address + 4;
}
else
{
/* Error occurred while writing data in Flash memory.
User can add here some code to deal with this error */
while (1)
{
/* Make LED1 blink (100ms on, 2s off) to indicate error in Write operation */
BSP_LED_On(LED1);
HAL_Delay(100);
BSP_LED_Off(LED1);
HAL_Delay(2000);
}
}
}
/* Lock the Flash to disable the flash control register access (recommended
to protect the FLASH memory against possible unwanted operation) *********/
HAL_FLASH_Lock();
}
I changed this so that, after reading the settings from flash into a buffer, all the changed settings are updated in the buffer, then the flash block is erased and the buffer is written back to flash. Downside: my first and 4th values always come back as NULL after saving this buffer to flash.
However, after a system reset the correct values are read from flash.
unsigned char Save_Settings(Save_Settings_struct* newSettings)
{
/* Program the user Flash area word by word
(area defined by FLASH_USER_START_ADDR and FLASH_USER_END_ADDR) ***********/
unsigned int a;
unsigned char readBack[60];
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
buf[a++] = *(__IO uint32_t *)Address;
Address = Address + 4;
}
a = 0;
while (a < S_MAXSETTING)
{
if (newSettings[a].settingNumber < S_MAXSETTING)
{
memset(&buf[a * 60], 0, 60); // Clear setting value
memcpy(&buf[a * 60], newSettings[a].settingValue, newSettings[a].settingLength); // Set setting value
}
++a;
}
Erase_User_Flash_Memory();
HAL_FLASH_Unlock();
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
if (HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, Address, buf[a++]) == HAL_OK)
{
Address = Address + 4;
}
else
{
/* Error occurred while writing data in Flash memory.
User can add here some code to deal with this error */
while (1)
{
/* Make LED1 blink (100ms on, 2s off) to indicate error in Write operation */
BSP_LED_On(LED1);
HAL_Delay(100);
BSP_LED_Off(LED1);
HAL_Delay(2000);
}
}
}
/* Lock the Flash to disable the flash control register access (recommended
to protect the FLASH memory against possible unwanted operation) *********/
HAL_FLASH_Lock();
}
I started playing around with cleaning and invalidating the data cache. At least the 2 values are not NULL anymore; however, they are still the old values. All other values are the new, saved values. After a reset, all values are correct.
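For reference, on the Cortex-M7 the "invalidate the data cache" attempt would typically be done with the standard CMSIS-Core cache functions over the user flash region before reading the values back, roughly like this (a sketch, not the original code; the addresses come from the FLASH_USER_* defines used above):
// drop any stale cache lines covering the user flash area so the read-back
// actually hits the freshly programmed flash instead of the data cache
SCB_InvalidateDCache_by_Addr((uint32_t *)FLASH_USER_START_ADDR,
                             FLASH_USER_END_ADDR - FLASH_USER_START_ADDR);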
Anybody ever had some similar problem? Or maybe an idea of what I can try to get rid of this problem?

read address from atmel m90e26s using mcp2210

I'm working on a school project where we want to monitor energy consumption using the atmel M90e26s chip.
I used the mcp2210 library and wrote this little test script:
void talk(hid_device* handle) {
ChipSettingsDef chipDef;
//set GPIO pins to be CS
chipDef = GetChipSettings(handle);
for (int i = 0; i < 9; i++) {
chipDef.GP[i].PinDesignation = GP_PIN_DESIGNATION_CS;
chipDef.GP[i].GPIODirection = GPIO_DIRECTION_OUTPUT;
chipDef.GP[i].GPIOOutput = 1;
}
int r = SetChipSettings(handle, chipDef);
//configure SPI
SPITransferSettingsDef def;
def = GetSPITransferSettings(handle);
//chip select is GP4
def.ActiveChipSelectValue = 0xffef;
def.IdleChipSelectValue = 0xffff;
def.BitRate = 50000l;
def.SPIMode = 4;
//enable write
byte spiCmdBuffer[3];
//read 8 bytes
def.BytesPerSPITransfer = 3;
r = SetSPITransferSettings(handle, def);
if (r != 0) {
printf("Errror setting SPI parameters.\n");
return;
}
spiCmdBuffer[0] = 0x01; //0000 0011 read
spiCmdBuffer[1] = 0x00; //address 0x00
SPIDataTransferStatusDef def1 = SPISendReceive(handle, spiCmdBuffer, 3);
for (int i = 0; i < 8; i++)
printf("%hhu\n", def1.DataReceived[i]);
}
Whatever address I try, I get no response. The problem seems to be this:
spiCmdBuffer[0] = 0x01; //0000 0011 read
spiCmdBuffer[1] = 0x00; //address 0x00
I know from the datasheet that the SPI interface looks like this:
[image: spi interface]
Can somebody help me find the register addresses of the atm90e26? All the addresses look like '01H', but that is not hexadecimal and it's not 7 bits either.
Yes, as you suspected the problem is in how you set the contents of spiCmdBuffer. The ATM90E26 expects both the read/write flag and the register address to be in the first byte of the SPI transaction: the read/write flag must be put at the most significant bit (value 1 to read from a register, value 0 to write to a register), while the register address is in the 7 remaining bits. So for example to read the register at address 0x01 (SysStatus) the code would look like:
spiCmdBuffer[0] = 0x80 | 0x01; // read System Status
The 0x80 value sets the read/write flag in the most significant bit, and the other value indicates the register address. The second and third bytes of a 3-byte read sequence don't need to be set to anything, since they are ignored by the ATM90E26.
After calling SPISendReceive(), to extract the register contents (16 bits) you have to read the second and third byte (MSB first) from the data received in the read transaction, like below:
uint16_t regValue = (((uint16_t)def1.DataReceived[1]) << 8) | def1.DataReceived[2];
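Putting the pieces together, a small helper using the same MCP2210 calls as in the question might look like this (readRegister is just an illustrative name; it assumes the SPI transfer settings from the question, 3 bytes per transfer with chip select on GP4, are already configured):
// read one 16-bit register from the ATM90E26 over the MCP2210
uint16_t readRegister(hid_device* handle, uint8_t address) {
    byte spiCmdBuffer[3] = { 0 };
    spiCmdBuffer[0] = 0x80 | (address & 0x7F); // MSB = read flag, low 7 bits = register address
    SPIDataTransferStatusDef def1 = SPISendReceive(handle, spiCmdBuffer, 3);
    // the register value comes back MSB first in the 2nd and 3rd received bytes
    return (uint16_t)(((uint16_t)def1.DataReceived[1] << 8) | def1.DataReceived[2]);
}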

DMA write to SD card (SSP) doesn't write bytes

I'm currently working on replacing a blocking busy-wait implementation of an SD card driver over SSP with a non-blocking DMA implementation. However, there are no bytes actually written, even though everything seems to go according to plan (no error conditions are ever found).
First some code (C++):
(Disclaimer: I'm still a beginner in embedded programming so code is probably subpar)
namespace SD {
bool initialize() {
//Setup SSP and detect SD card
//... (removed since not relevant for question)
//Setup DMA
LPC_SC->PCONP |= (1UL << 29);
LPC_GPDMA->Config = 0x01;
//Enable DMA interrupts
NVIC_EnableIRQ(DMA_IRQn);
NVIC_SetPriority(DMA_IRQn, 4);
//enable SSP interrupts
NVIC_EnableIRQ(SSP2_IRQn);
NVIC_SetPriority(SSP2_IRQn, 4);
}
bool write (size_t block, uint8_t const * data, size_t blocks) {
//TODO: support more than one block
ASSERT(blocks == 1);
printf("Request sd semaphore (write)\n");
sd_semaphore.take();
printf("Writing to block " ANSI_BLUE "%d" ANSI_RESET "\n", block);
memcpy(SD::write_buffer, data, BLOCKSIZE);
//Start the write
uint8_t argument[4];
reset_argument(argument);
pack_argument(argument, block);
if (!send_command(CMD::WRITE_BLOCK, CMD_RESPONSE_SIZE::WRITE_BLOCK, response, argument)){
return fail();
}
assert_cs();
//needs 8 clock cycles
delay8(1);
//reset pending interrupts
LPC_GPDMA->IntTCClear = 0x01 << SD_DMACH_NR;
LPC_GPDMA->IntErrClr = 0x01 << SD_DMACH_NR;
LPC_GPDMA->SoftSReq = SD_DMA_REQUEST_LINES;
//Prepare channel
SD_DMACH->CSrcAddr = (uint32_t)SD::write_buffer;
SD_DMACH->CDestAddr = (uint32_t)&SD_SSP->DR;
SD_DMACH->CLLI = 0;
SD_DMACH->CControl = (uint32_t)BLOCKSIZE
| 0x01 << 26 //source increment
| 0x01 << 31; //Terminal count interrupt
SD_SSP->DMACR = 0x02; //Enable ssp write dma
SD_DMACH->CConfig = 0x1 //enable
| SD_DMA_DEST_PERIPHERAL << 6
| 0x1 << 11 //mem to peripheral
| 0x1 << 14 //enable error interrupt
| 0x1 << 15; //enable terminal count interrupt
return true;
}
}
extern "C" __attribute__ ((interrupt)) void DMA_IRQHandler(void) {
printf("dma irq\n");
uint8_t channelBit = 1 << SD_DMACH_NR;
if (LPC_GPDMA->IntStat & channelBit) {
if (LPC_GPDMA->IntTCStat & channelBit) {
printf(ANSI_GREEN "terminal count interrupt\n" ANSI_RESET);
LPC_GPDMA->IntTCClear = channelBit;
}
if (LPC_GPDMA->IntErrStat & channelBit) {
printf(ANSI_RED "error interrupt\n" ANSI_RESET);
LPC_GPDMA->IntErrClr = channelBit;
}
SD_DMACH->CConfig = 0;
SD_SSP->IMSC = (1 << 3);
}
}
extern "C" __attribute__ ((interrupt)) void SSP2_IRQHandler(void) {
if (SD_SSP->MIS & (1 << 3)) {
SD_SSP->IMSC &= ~(1 << 3);
printf("waiting until idle\n");
while(SD_SSP->SR & (1UL << 4));
//Stop transfer token
//I'm not sure if the part below up until deassert_cs is necessary.
//Adding or removing it made no difference.
SPI::send(0xFD);
{
uint8_t response;
unsigned int timeout = 4096;
do {
response = SPI::receive();
} while(response != 0x00 && --timeout);
if (timeout == 0){
deassert_cs();
printf("fail");
return;
}
}
//Now wait until the device isn't busy anymore
{
uint8_t response;
unsigned int timeout = 4096;
do {
response = SPI::receive();
} while(response != 0xFF && --timeout);
if (timeout == 0){
deassert_cs();
printf("fail");
return;
}
}
deassert_cs();
printf("idle\n");
SD::sd_semaphore.give_from_isr();
}
}
A few remarks about the code and setup:
Written for the lpc4088 with FreeRTOS
All SD_xxx defines are conditional defines to select the right pins (I need to use SSP2 in my dev setup, SSP0 for the final product)
All external functions that are not defined in this snippet (e.g. pack_argument, send_command, semaphore.take(), etc.) are known to be working correctly (most of these come from the working busy-wait SD implementation; I can't of course guarantee 100% that they are bug-free, but they seem to be working right).
Since I'm in the process of debugging this, there are a lot of printfs and hardcoded SSP2 variables. These are of course temporary.
I mostly used this as example code.
Now I have already tried the following things:
Write without DMA using busy-wait over SSP. As mentioned before I started with a working implementation of this, so I know the problem has to be in the DMA implementation and not somewhere else.
Write from mem->mem instead of mem->sd to eliminate the SSP peripheral. mem->mem worked fine, so the problem must be in the SSP part of the DMA setup.
Checked if the ISRs are called. They are: first the DMA ISR is called for the terminal count interrupt, and then the SSP2 ISR is called. So the ISRs are (probably) set up correctly.
Made a binary dump of the entire SD content to see if the content might have been written to the wrong location. Result: the content sent over DMA was not present anywhere on the SD card (I did this after every change I made to the code; none of them got the data onto the SD card).
Added a long (~1-2 second) timeout in the SSP ISR by repeatedly requesting bytes from the SD card, to make sure that there wasn't a timing issue (e.g. that I tried to read the bytes before the SD card had a chance to process everything). This didn't change the outcome at all.
Unfortunately, due to a lack of hardware tools, I haven't yet been able to verify whether the bytes are actually sent over the data lines.
What is wrong with my code, or where can I look to find the cause of this problem? After spending way more hours on this than I'd like to admit, I really have no idea how to get this working, and any help is appreciated!
UPDATE: I did a lot more testing and got some more results. The results below were obtained by writing 4 blocks of 512 bytes. Each block contains constantly increasing numbers modulo 256, so each block contains 2 sequences going from 0 to 255. Results:
Data is actually written to the SD card. However, it seems that the first block written is lost. I suppose there is some setup done in the write function that needs to be done earlier.
The bytes are put in a very weird (and wrong) order: I basically get all the even numbers followed by all the odd numbers. Thus I first get the even numbers 0x00 - 0xFE and then all the odd numbers 0x01 - 0xFF (the total number of written bytes seems to be correct, with the exception of the missing first block). However, there's one exception in this sequence: each block contains 2 of these sequences (a sequence is 256 bytes, a block is 512), but the first sequence in each block has 0xFE and 0xFF "swapped". That is, 0xFF is the end of the even numbers and 0xFE is the end of the odd series. I have no idea what kind of black magic is going on here. Just in case I've done something dumb, here's the snippet that writes the bytes:
uint8_t block[512];
for (int i = 0; i < 512; i++) {
block[i] = (uint8_t)(i % 256);
}
if (!SD::write(10240, block, 1)) { //this one isn't actually written
WARN("noWrite", proc);
}
if (!SD::write(10241, block, 1)) {
WARN("noWrite", proc);
}
if (!SD::write(10242, block, 1)) {
WARN("noWrite", proc);
}
if (!SD::write(10243, block, 1)) {
WARN("noWrite", proc);
}
And here is the raw binary dump. Note that this exact pattern is fully reproducible: so far each time I tried this I got this exact same pattern.
Update2: Not sure if it's relevant, but I use sdram for memory.
When I finally got my hands on a logic analyzer I got a lot more information and was able to solve these problems.
There were a few small bugs in my code, but the bug that caused this behaviour was that I didn't send the "start block" token (0xFE) before the block and I didn't send the 16 bit (dummy) crc after the block. When I added these to the transfer buffer everything was written successfully!
So the fix was as follows:
bool write (size_t block, uint8_t const * data, size_t blocks) {
//TODO: support more than one block
ASSERT(blocks == 1);
printf("Request sd semaphore (write)\n");
sd_semaphore.take();
printf("Writing to block " ANSI_BLUE "%d" ANSI_RESET "\n", block);
SD::write_buffer[0] = 0xFE; //start block
memcpy(&SD::write_buffer[1], data, BLOCKSIZE);
SD::write_buffer[BLOCKSIZE + 1] = 0; //dummy crc
SD::write_buffer[BLOCKSIZE + 2] = 0;
//...
}
As a side note, the reason why the first block wasn't written was simply because I didn't wait until the device was ready before sending the first block. Doing so fixed the problem.