How to exploit double buffering for reading digital inputs state? - c++

I have the following situation. I have a microcontroller which communicates with two external I/O expander chips via one SPI peripheral. Each chip has eight digital inputs and is equipped with a latch input, which ensures that both bytes of digital inputs can be sampled at the same instant in time. To get the state of both bytes into my microcontroller I need to do two SPI transactions. At the same time I need to ensure that the software in my microcontroller always works with a consistent state of both bytes.
My first idea for solving this problem was to use a sort of double buffer. Below is pseudocode describing the idea.
// Flags and received bytes, written by the SPI "end of transaction" interrupt
volatile bool chip_0_data_received = false;
volatile bool chip_1_data_received = false;
volatile uint8_t di_state_chip_0;
volatile uint8_t di_state_chip_1;

uint8_t di_array_01[2] = {0};
uint8_t di_array_02[2] = {0};
uint8_t *ready_data = di_array_01;  // consistent snapshot used by the higher layer
uint8_t *shadow_data = di_array_02; // buffer currently being filled
uint8_t *temp;

if (chip_0_data_received) {
    shadow_data[0] = di_state_chip_0;
    chip_0_data_received = false;
} else if (chip_1_data_received) {
    shadow_data[1] = di_state_chip_1;
    // both bytes of the shadow buffer are now valid, so swap the buffers
    temp = ready_data;
    ready_data = shadow_data;
    shadow_data = temp;
    chip_1_data_received = false;
}
The higher software layer will always work with the content of the array pointed to by the ready_data pointer. My intention is that the boolean flags chip_0_data_received (chip_1_data_received) are set in the "end of transaction" interrupt, and that the code above is invoked from the background loop along with the code that starts the SPI transactions.
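For completeness, a rough sketch of the interrupt side I have in mind (the callback name and its parameters are placeholders, not a real HAL API):
// Hypothetical "end of transaction" callback: store the byte received from the
// given chip and raise the corresponding flag for the background loop.
void spi_transfer_complete_isr(uint8_t chip_index, uint8_t received_byte)
{
    if (chip_index == 0) {
        di_state_chip_0 = received_byte;
        chip_0_data_received = true;
    } else {
        di_state_chip_1 = received_byte;
        chip_1_data_received = true;
    }
}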
Does anybody see any potential problem which I have overlooked?

If your data is only 16 bits in total, you can read and write it atomically (provided the CPU can access a 16-bit value in a single operation).
uint16_t combined_data;
// in reading function
if (chip_0_data_received && chip_1_data_received)
{
combined_data = (((uint16_t)di_state_chip_1 << 8) | di_state_chip_0);
}
// in using function
uint16_t get_combined_data = combined_data;
uint8_t data_chip_1 = ((get_combined_data >> 8) & 0xFF);
uint8_t data_chip_0 = ((get_combined_data >> 0) & 0xFF);
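If the target CPU cannot do that (e.g. an 8-bit core where a 16-bit access compiles to two instructions), the copy in the "using function" can be wrapped in a short critical section instead. A minimal sketch, assuming CMSIS-style intrinsics (substitute whatever your toolchain provides):
// Sketch: guard the 16-bit copy when it is not a single machine instruction.
// __disable_irq()/__enable_irq() are CMSIS intrinsics; on AVR the equivalent
// would be cli()/sei().
__disable_irq();
uint16_t get_combined_data = combined_data;
__enable_irq();
uint8_t data_chip_1 = (get_combined_data >> 8) & 0xFF;
uint8_t data_chip_0 = get_combined_data & 0xFF;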

Related

C++ reading off ADC at pace of CLK frequency

I am using a hyperspectral sensor from Hamamatsu (C12880MA). So far, I have got my firmware done to fit the ATmega328P; timings etc. are working great.
But now, I bumped into some issues regarding reading the measurements off the ADC.
According to my Clock-Signal generation (at variable f between 0.5 and 5 MHz), I need to read off the ADC values at exact "flag" values.
I am using the internal TC1 timer for clock generation. At each toggle of the CLK signal, I set a flag via an ISR to time the other signals / restart the program.
Now to the problem: I know (see the datasheet for reference) that the VIDEO signal of the sensor will appear between the flags "ADC_Start" and "ADC_End" at the pace of the generated CLK. I have to read the values through the internal Arduino ADC at exactly the correct flag, to later match them with the correct underlying wavelengths.
Here are the essential snippets of the code:
volatile uint16_t flag = 0;
uint16_t data[288]; // define 1d matrix for data storage
uint16_t ST_Start = 2; // start flag for the program
uint16_t ST_End = 134; // end flag for ST signal
uint16_t ADC_Start = 310; //starter flag for ADC readout
uint16_t ADC_End = 886; //end flag for ADC readout
uint16_t index; //index used for data storage
volatile uint16_t End = 1000; // end flag for program restart and clearing cache
The ISR handles incrementing the flag on every timer compare match:
ISR(TIMER1_COMPA_vect) {
    if (flag <= End) {
        flag = flag + 1;
    } else {
        flag = 0;
        //delayMicroseconds(2000);
    }
}
This is my readData function. Unfortunately, I have not found the correct way to read the ADC values at the exact flags needed. Say I need to read values at the flags 300, 302, 304, ..., 900 (on every second flag, within a set interval: ADC_Start to ADC_End); see the sketch after the function.
void readData() {
if (flag >= ADC_Start && flag <= ADC_End ){
for (uint16_t i = ADC_Start; i < ADC_End; i = i+2)
{
index = i - ADC_Start;
data[index] = analogRead(VID);
}
} else { }
}
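For illustration, this is the sampling pattern I am trying to achieve. It assumes readData-like code is called continuously from loop(); last_sampled_flag is a made-up bookkeeping variable, not something from my current code:
// Sketch: take exactly one ADC reading per matching flag value instead of
// looping through the whole range at once.
volatile uint16_t last_sampled_flag = 0;

void readDataSketch() {
  // (strictly, this 16-bit read of a volatile should be guarded against the ISR on an 8-bit AVR)
  uint16_t f = flag;
  if (f >= ADC_Start && f < ADC_End        // inside the video window
      && ((f - ADC_Start) % 2 == 0)        // only every second flag
      && f != last_sampled_flag) {         // not yet sampled at this flag
    data[(f - ADC_Start) / 2] = analogRead(VID);
    last_sampled_flag = f;
  }
}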
HereĀ“s the data sheet: https://www.hamamatsu.com/resources/pdf/ssd/c12880ma_kacc1226e.pdf
See p.7 for timings.
Thanks!

VirtualBox crashing when assembly "sti" is executed when testing Operating System

I'm currently attempting to build my own Operating System and have run into an issue when trying to test out my kernel code using VirtualBox.
The real issue arises when I call the assembly instruction sti as I'm currently attempting to implement an interrupt descriptor table and communicate with the PICs.
Here is the code that calls it. It's a function called kernel_main that is called from another assembly file. That file simply sets up the stack before executing any code from the OS, but there haven't been any issues there, and everything works fine until I add the instruction asm("sti"); to the following code:
/* main function of our kernel
* accepts the pointer to multiboot and the magic code (no particular reason to take the magic number)
*
* use extern C to prevent gcc from changing the name
*/
extern "C" void kernel_main(void *multiboot_structure, uint32_t magic_number)
{
// can't use the standard printf as we're outside an OS currently
// we don't have access to glibc so we write our own printf
printf_boot_message("kernel.....\n");
// create the global descriptor table
GlobalDescriptorTable gdt;
// create the interrupt descriptor table
InterruptHandler interrupt_handler(&gdt);
// enable interrupts (test)
asm("sti"); // <- causes crash
// random debug printf
printf_boot_message("sti called\n");
// kernel never really stops, inf loop
while (1)
;
}
Below is the VirtualBox debug output. I've googled around for VINF_EM_TRIPLE_FAULT but mostly found RAM-related issues that I don't think apply to me. The printf calls in the above code execute as expected, followed by the VM immediately crashing with the following:
Link to output as it's too large to post here: https://pastebin.com/jfPfhJUQ
Here is my interrupt handling code:
/* Implementations of the interrupt handling routines in sys_interrupts.h
*/
#include "sys_interrupts.h"
#include "misc.h"
//handle() is used to take the interrupt number,
//i_number, and the address of the current CPU stack frame.
uint32_t InterruptHandler::handle(uint8_t i_number, uint32_t crnt_stkptr)
{
// debug
printf(" INTERRUPT");
// after the interrupt code has been executed,
// return the stack pointer so that the CPU can resume
// where it left off.
// this works for now as we do not have multiple
// concurrent processes running, so there is no issue
// of handling the thread number.
return crnt_stkptr;
}
// define the global descriptor table
InterruptHandler::_gate_descriptor InterruptHandler::interrupt_desc_table[N_ENTRIES];
// define the constructor. Takes a pointer to the global
// descriptor table
InterruptHandler::InterruptHandler(GlobalDescriptorTable* global_desc_table)
{
// grab the offset of the usable memory within our global segment
uint16_t seg = global_desc_table->CodeSegmentSelector();
// set all the entries in the IDT to block request initially
for (uint16_t i = 0; i < N_ENTRIES; i++)
{
// create a gate for a system-level interrupt, calling the block function (does nothing) using seg as its segment.
create_entry(i, seg, &block_request, PRIV_LVL_KERNEL, GATE_INTERRUPT);
}
// create a couple interrupts for 0x00 and 0x01, really 0x20 and 0x21 in memory
//create_entry(BASE_I_NUM + 0x00, seg, &isr0x00, PRIV_LVL_KERNEL, GATE_INTERRUPT);
//create_entry(BASE_I_NUM + 0x01, seg, &isr0x01, PRIV_LVL_KERNEL, GATE_INTERRUPT);
// init the PICs
pic_controller.send_master_cmd(PIC_INIT);
pic_controller.send_slave_cmd(PIC_INIT);
// tell master pic to add 0x20 to any interrupt number it sends to CPU, while slave pic sends 0x28 + i_number
pic_controller.send_master_data(PIC_OFFSET_MASTER);
pic_controller.send_slave_data(PIC_OFFSET_SLAVE);
// set the interrupt vectoring to cascade and tell master that there is a slave PIC at IRQ2
pic_controller.send_master_data(ICW1_INTERVAL4);
pic_controller.send_slave_data(ICW1_SINGLE);
// set the PICs to work in 8086 mode
pic_controller.send_master_data(ICW1_8086);
pic_controller.send_slave_data(ICW1_8086);
// send 0s
pic_controller.send_master_data(DEFAULT_MASK);
pic_controller.send_slave_data(DEFAULT_MASK);
// tell the cpu to use the table
interrupt_desc_table_pointerdata idt_ptr;
//set the size
idt_ptr.table_size = N_ENTRIES * sizeof(_gate_descriptor) - 1;
// set the base address
idt_ptr.base_addr = (uint32_t)interrupt_desc_table;
// use lidt instruction to load the table
// the cpu will map interrupts to the table
asm volatile("lidt %0" : : "m" (idt_ptr));
// issue debug print
printf_boot_message(" 2: Created Interrupt Desc Table...\n");
}
// define the destructor of the class
InterruptHandler::~InterruptHandler()
{
}
// function to make entries in the IDT
// takes the interrupt number as an index, the segment offset it used to specify which memory segment to use
// a pointer to the function to call, the flags and access level.
void InterruptHandler::create_entry(uint8_t i_number, uint16_t segment_desc_offset, void (*isr)(), uint8_t priv_lvl, uint8_t desc_type)
{
// set the i_number'th entry to the given params
// take the lower bits of the pointer
interrupt_desc_table[i_number].handler_lower_bits = ((uint32_t)isr) & 0xFFFF;
// take the upper bits
interrupt_desc_table[i_number].handler_upper_bits = (((uint32_t)isr) >> 16) & 0xFFFF;
// calculate the privilege byte, setting the correct bits
interrupt_desc_table[i_number].priv_lvl = 0x80 | ((priv_lvl & 3) << 5) | desc_type;
interrupt_desc_table[i_number].segment_desc_offset = segment_desc_offset;
// reserved byte is always 0
interrupt_desc_table[i_number].reserved_byte = 0;
}
// need a function to block or ignore any requests
// that we don't want to service. Requests could be caused
// by devices we haven't yet configured when testing the os.
void InterruptHandler::block_request()
{
// do nothing
}
// function to tell the CPU to send interrupts
// to this table
void InterruptHandler::set_active()
{
// call the sti instruction to start interrupt polling at the CPU level
asm volatile("sti"); // <- calling this crashes the kernel
// issue debug print
printf_boot_message(" 4: Activated sys interrupts...\n");
}
And here is the code for my GDT; I followed the OSDev wiki guide for this:
#include "global_desc_table.h"
/**
* A code segment is identified by flag 0x9A, cannot write to a code segment
* while a data segment is identified by flag 0x92
*
* Based on the C code present on OSDEV Wiki
*/
GlobalDescriptorTable::GlobalDescriptorTable() : nullSegmentSelector(0, 0, 0),
unusedSegmentSelector(0, 0, 0),
codeSegmentSelector(0, 64*1024*1024, 0x9A),
dataSegmentSelector(0, 64*1024*1024, 0x92)
{
//8 bytes defined, but processor expects 6 bytes only
uint32_t i[2];
//first 4 bytes is address of table
i[0] = (uint32_t)this;
//second 4 bytes, the high bytes, are size of global desc table
i[1] = sizeof(GlobalDescriptorTable) << 16;
// tell processor to use this table using the lgdt instruction
asm volatile("lgdt (%0)" : : "p" (((uint8_t *) i) + 2));
// issue debug print
printf_boot_message(" 1: Created Global Desc Table...\n");
}
// function to get the offset of the datasegment selector
uint16_t GlobalDescriptorTable::DataSegmentSelector()
{
// calculate the offset by subtracting the table's address from the datasegment's address
return (uint8_t *) &dataSegmentSelector - (uint8_t*)this;
}
// function to get the offset of the code segment
uint16_t GlobalDescriptorTable::CodeSegmentSelector()
{
// calculate the offset by subtracting the table's address from the code segment's address
return (uint8_t *) &codeSegmentSelector - (uint8_t*)this;
}
// default destructor
GlobalDescriptorTable::~GlobalDescriptorTable()
{
}
/**
* The constructor to create a new entry segment, set the flags, determine the formatting for the limit, and set the base
*/
GlobalDescriptorTable::SegmentDescriptor::SegmentDescriptor(uint32_t base, uint32_t limit, uint8_t flags)
{
uint8_t* target = (uint8_t*)this;
//if 16 bit limit
if (limit <= 65536)
{
// tell processor that this is a 16bit entry
target[6] = 0x40;
} else {
// if the last 12 bits of limit are not 1s
if ((limit & 0xFFF) != 0xFFF)
{
limit = (limit >> 12) - 1;
} else {
limit >>= 12;
}
// indicate that there was a shift of 12 done
target[6] = 0xC0;
}
// set the two lowest bytes of limit
target[0] = limit & 0xFF;
target[1] = (limit >> 8) & 0xFF;
//the rest of limit must go in lower 4 bit of byte 6, and byte 5
target[6] |= (limit >> 16) & 0xF;
//encode the pointer
target[2] = base & 0xFF;
target[3] = (base >> 8) & 0xFF;
target[4] = (base >> 16) & 0xFF;
target[7] = (base >> 24) & 0xFF;
// set the flags
target[5] = flags;
}
/**
* Define the methods to get the base pointer from an segment and
* the limit for a segment, taken from os wiki
*/
uint32_t GlobalDescriptorTable::SegmentDescriptor::Base()
{
// simply do the reverse of what was done to place the pointer in
uint8_t* target = (uint8_t*) this;
uint32_t result = target[7];
result = (result << 8) + target[4];
result = (result << 8) + target[3];
result = (result << 8) + target[2];
return result;
}
uint32_t GlobalDescriptorTable::SegmentDescriptor::Limit()
{
uint8_t* target = (uint8_t *)this;
uint32_t result = target[6] & 0xF;
result = (result << 8) + target[1];
result = (result << 8) + target[0];
//check if there was a shift of 12
if (target[6] & 0xC0 == 0xC0)
{
result = (result << 12) & 0xFFF;
}
return result;
}
i[0] = (uint32_t)this;
//second 4 bytes, the high bytes, are size of global desc table
i[1] = sizeof(GlobalDescriptorTable) << 16;
I've had the same problem; just swap the 0 and 1 indices:
i[1] = (uint32_t)this;
//second 4 bytes, the high bytes, are size of global desc table
i[0] = sizeof(GlobalDescriptorTable) << 16;
That's the problem if you are following the same tutorial, and I think you are if you ended up here.
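For context, the reason the swap matters: lgdt reads a 6-byte operand made of a 16-bit limit followed by a 32-bit base address (little-endian). A packed struct makes the intended layout explicit (sketch; the struct name is illustrative):
#include <stdint.h>

// The 6-byte operand that lgdt expects: limit first, then base.
struct __attribute__((packed)) GdtPointer {
    uint16_t limit;   // table size in bytes minus 1
    uint32_t base;    // linear address of the first descriptor
};

// With i[0] = (uint32_t)this and i[1] = size << 16, the bytes at
// ((uint8_t*)i) + 2 give the upper half of the table address as the limit
// and size << 16 as the base, so the CPU loads a garbage GDTR.
// After the swap, bytes 2-3 hold the size and bytes 4-7 hold the table
// address, which is exactly what lgdt reads from that offset.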
The crash can also happen due to a wrong IDTR value (an invalid pointer causes the triple fault).
Check the IDTR register value in the VirtualBox log.
If you load the IDT while in protected mode, the address of the IDT can show up with some weird changes (shifted left by 16 bits, or with an unexpected value in the lower 16 bits).
Try changing the pointer according to that (that's how I did it), or use lidt before entering protected mode (this is also tested).
There was a bug in my GDT that forced the kernel to read an invalid pointer from the segment. This caused a seg fault.

read address from atmel m90e26s using mcp2210

I'm working on a school project where we want to monitor energy consumption using the Atmel ATM90E26 chip.
I used the mcp2210 library and wrote this little test script:
void talk(hid_device* handle) {
ChipSettingsDef chipDef;
//set GPIO pins to be CS
chipDef = GetChipSettings(handle);
for (int i = 0; i < 9; i++) {
chipDef.GP[i].PinDesignation = GP_PIN_DESIGNATION_CS;
chipDef.GP[i].GPIODirection = GPIO_DIRECTION_OUTPUT;
chipDef.GP[i].GPIOOutput = 1;
}
int r = SetChipSettings(handle, chipDef);
//configure SPI
SPITransferSettingsDef def;
def = GetSPITransferSettings(handle);
//chip select is GP4
def.ActiveChipSelectValue = 0xffef;
def.IdleChipSelectValue = 0xffff;
def.BitRate = 50000l;
def.SPIMode = 4;
//enable write
byte spiCmdBuffer[3];
//read 8 bytes
def.BytesPerSPITransfer = 3;
r = SetSPITransferSettings(handle, def);
if (r != 0) {
printf("Errror setting SPI parameters.\n");
return;
}
spiCmdBuffer[0] = 0x01; //0000 0011 read
spiCmdBuffer[1] = 0x00; //address 0x00
SPIDataTransferStatusDef def1 = SPISendReceive(handle, spiCmdBuffer, 3);
for (int i = 0; i < 8; i++)
printf("%hhu\n", def1.DataReceived[i]);
}
Whatever address I try, I get no response. The problem seems to be this:
spiCmdBuffer[0] = 0x01; //0000 0011 read
spiCmdBuffer[1] = 0x00; //address 0x00
I know from the datasheet that the spi interface looks like this:
spi interface
Can somebody help me find the register addresses of the ATM90E26? All the addresses look like '01H', but that is not hexadecimal and it's not 7 bits either.
Yes, as you suspected the problem is in how you set the contents of spiCmdBuffer. The ATM90E26 expects both the read/write flag and the register address to be in the first byte of the SPI transaction: the read/write flag must be put at the most significant bit (value 1 to read from a register, value 0 to write to a register), while the register address is in the 7 remaining bits. So for example to read the register at address 0x01 (SysStatus) the code would look like:
spiCmdBuffer[0] = 0x80 | 0x01; // read System Status
The 0x80 value sets the read/write flag in the most significant bit, and the other value indicates the register address. The second and third bytes of a 3-byte read sequence don't need to be set to anything, since they are ignored by the ATM90E26.
After calling SPISendReceive(), to extract the register contents (16 bits) you have to read the second and third byte (MSB first) from the data received in the read transaction, like below:
uint16_t regValue = (((uint16_t)def1.DataReceived[1]) << 8) | def1.DataReceived[2];
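Putting the two snippets together, a read helper could look like this (a sketch reusing the mcp2210 library calls already shown in the question; error handling omitted):
// Read one 16-bit register from the ATM90E26 over the MCP2210.
// 'handle' and the SPI transfer settings are assumed to be configured as above.
uint16_t readRegister(hid_device* handle, uint8_t regAddress)
{
    byte spiCmdBuffer[3];
    spiCmdBuffer[0] = 0x80 | (regAddress & 0x7F); // read flag + 7-bit address
    spiCmdBuffer[1] = 0x00;                       // dummy bytes, ignored by the chip
    spiCmdBuffer[2] = 0x00;

    SPIDataTransferStatusDef status = SPISendReceive(handle, spiCmdBuffer, 3);

    // The register contents come back in the second and third received bytes, MSB first.
    return (((uint16_t)status.DataReceived[1]) << 8) | status.DataReceived[2];
}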

DMA write to SD card (SSP) doesn't write bytes

I'm currently working on replacing a blocking busy-wait implementation of an SD card driver over SSP with a non-blocking DMA implementation. However, there are no bytes actually written, even though everything seems to go according to plan (no error conditions are ever found).
First some code (C++):
(Disclaimer: I'm still a beginner in embedded programming so code is probably subpar)
namespace SD {
bool initialize() {
//Setup SSP and detect SD card
//... (removed since not relevant for question)
//Setup DMA
LPC_SC->PCONP |= (1UL << 29);
LPC_GPDMA->Config = 0x01;
//Enable DMA interrupts
NVIC_EnableIRQ(DMA_IRQn);
NVIC_SetPriority(DMA_IRQn, 4);
//enable SSP interrupts
NVIC_EnableIRQ(SSP2_IRQn);
NVIC_SetPriority(SSP2_IRQn, 4);
}
bool write (size_t block, uint8_t const * data, size_t blocks) {
//TODO: support more than one block
ASSERT(blocks == 1);
printf("Request sd semaphore (write)\n");
sd_semaphore.take();
printf("Writing to block " ANSI_BLUE "%d" ANSI_RESET "\n", block);
memcpy(SD::write_buffer, data, BLOCKSIZE);
//Start the write
uint8_t argument[4];
reset_argument(argument);
pack_argument(argument, block);
if (!send_command(CMD::WRITE_BLOCK, CMD_RESPONSE_SIZE::WRITE_BLOCK, response, argument)){
return fail();
}
assert_cs();
//needs 8 clock cycles
delay8(1);
//reset pending interrupts
LPC_GPDMA->IntTCClear = 0x01 << SD_DMACH_NR;
LPC_GPDMA->IntErrClr = 0x01 << SD_DMACH_NR;
LPC_GPDMA->SoftSReq = SD_DMA_REQUEST_LINES;
//Prepare channel
SD_DMACH->CSrcAddr = (uint32_t)SD::write_buffer;
SD_DMACH->CDestAddr = (uint32_t)&SD_SSP->DR;
SD_DMACH->CLLI = 0;
SD_DMACH->CControl = (uint32_t)BLOCKSIZE
| 0x01 << 26 //source increment
| 0x01 << 31; //Terminal count interrupt
SD_SSP->DMACR = 0x02; //Enable ssp write dma
SD_DMACH->CConfig = 0x1 //enable
| SD_DMA_DEST_PERIPHERAL << 6
| 0x1 << 11 //mem to peripheral
| 0x1 << 14 //enable error interrupt
| 0x1 << 15; //enable terminal count interrupt
return true;
}
}
extern "C" __attribute__ ((interrupt)) void DMA_IRQHandler(void) {
printf("dma irq\n");
uint8_t channelBit = 1 << SD_DMACH_NR;
if (LPC_GPDMA->IntStat & channelBit) {
if (LPC_GPDMA->IntTCStat & channelBit) {
printf(ANSI_GREEN "terminal count interrupt\n" ANSI_RESET);
LPC_GPDMA->IntTCClear = channelBit;
}
if (LPC_GPDMA->IntErrStat & channelBit) {
printf(ANSI_RED "error interrupt\n" ANSI_RESET);
LPC_GPDMA->IntErrClr = channelBit;
}
SD_DMACH->CConfig = 0;
SD_SSP->IMSC = (1 << 3);
}
}
extern "C" __attribute__ ((interrupt)) void SSP2_IRQHandler(void) {
if (SD_SSP->MIS & (1 << 3)) {
SD_SSP->IMSC &= ~(1 << 3);
printf("waiting until idle\n");
while(SD_SSP->SR & (1UL << 4));
//Stop transfer token
//I'm not sure if the part below up until deassert_cs is necessary.
//Adding or removing it made no difference.
SPI::send(0xFD);
{
uint8_t response;
unsigned int timeout = 4096;
do {
response = SPI::receive();
} while(response != 0x00 && --timeout);
if (timeout == 0){
deassert_cs();
printf("fail");
return;
}
}
//Now wait until the device isn't busy anymore
{
uint8_t response;
unsigned int timeout = 4096;
do {
response = SPI::receive();
} while(response != 0xFF && --timeout);
if (timeout == 0){
deassert_cs();
printf("fail");
return;
}
}
deassert_cs();
printf("idle\n");
SD::sd_semaphore.give_from_isr();
}
}
A few remarks about the code and setup:
Written for the lpc4088 with FreeRTOS
All SD_xxx defines are conditional defines to select the right pins (I need to use SSP2 in my dev setup, SSP0 for the final product)
All external functions that are not defined in this snippet (e.g. pack_argument, send_command, semaphore.take(), etc.) are known to be working correctly (most of them come from the working busy-wait SD implementation; I can't of course guarantee 100% that they are bug-free, but they seem to be working right).
Since I'm in the process of debugging this there are a lot of printfs and hardcoded SSP2 variables. These are of course temporarily.
I mostly used this as example code.
Now I have already tried the following things:
Write without DMA using busy-wait over SSP. As mentioned before I started with a working implementation of this, so I know the problem has to be in the DMA implementation and not somewhere else.
Write from mem->mem instead of mem->sd to eliminate the SSP peripheral. mem->mem worked fine, so the problem must be in the SSP part of the DMA setup.
Checked if the ISRs are called. They are: first the DMA ISR is called for the terminal count interrupt, and then the SSP2 ISR is called. So the ISRs are (probably) set up correctly.
Made a binary dump of the entire SD content to see if the content might have been written to the wrong location. Result: the content sent over DMA was not present anywhere on the SD card (I did this after every change I made to the code; none of them got the data onto the SD card).
Added a long (~1-2 second) timeout in the SSP ISR by repeatedly requesting bytes from the SD card to make sure that there wasn't a timing issue (e.g. that I tried to read the bytes before the SD card had a chance to process everything). This didn't change the outcome at all.
Unfortunately, due to a lack of hardware tools, I haven't yet been able to verify whether the bytes are actually sent over the data lines.
What is wrong with my code, or where can I look to find the cause of this problem? After spending way more hours on this then I'd like to admit I really have no idea how to get this working and any help is appreciated!
UPDATE: I did a lot more testing, and thus I got some more results. I got the results below by writing 4 blocks of 512 bytes. Each block contains constantly increasing numbers modulo 256; thus each block contains 2 sequences going from 0 to 255. Results:
Data is actually written to the SD card. However, it seems that the first block written is lost. I suppose there is some setup done in the write function that needs to be done earlier.
The bytes are put in a very weird (and wrong) order: I basically get alternating all even numbers followed by all odd numbers. Thus I first get even numbers 0x00 - 0xFE and then all odd numbers 0x01 - 0xFF (total number of written bytes seems to be correct, with the exception of the missing first block). However, there's even one exception in this sequence: each block contains 2 of these sequences (sequence is 256 bytes, block is 512), but the first sequence in each block has 0xfe and 0xff "swapped". That is, 0xFF is the end of the even numbers and 0xFE is the end of the odd series. I have no idea what kind of black magic is going on here. Just in case I've done something dumb here's the snippet that writes the bytes:
uint8_t block[512];
for (int i = 0; i < 512; i++) {
block[i] = (uint8_t)(i % 256);
}
if (!SD::write(10240, block, 1)) { //this one isn't actually written
WARN("noWrite", proc);
}
if (!SD::write(10241, block, 1)) {
WARN("noWrite", proc);
}
if (!SD::write(10242, block, 1)) {
WARN("noWrite", proc);
}
if (!SD::write(10243, block, 1)) {
WARN("noWrite", proc);
}
And here is the raw binary dump. Note that this exact pattern is fully reproducible: so far each time I tried this I got this exact same pattern.
Update 2: Not sure if it's relevant, but I use SDRAM for memory.
When I finally got my hands on a logic analyzer I got a lot more information and was able to solve these problems.
There were a few small bugs in my code, but the bug that caused this behaviour was that I didn't send the "start block" token (0xFE) before the block and I didn't send the 16 bit (dummy) crc after the block. When I added these to the transfer buffer everything was written successfully!
So the fix was as follows:
bool write (size_t block, uint8_t const * data, size_t blocks) {
//TODO: support more than one block
ASSERT(blocks == 1);
printf("Request sd semaphore (write)\n");
sd_semaphore.take();
printf("Writing to block " ANSI_BLUE "%d" ANSI_RESET "\n", block);
SD::write_buffer[0] = 0xFE; //start block
memcpy(&SD::write_buffer[1], data, BLOCKSIZE);
SD::write_buffer[BLOCKSIZE + 1] = 0; //dummy crc
SD::write_buffer[BLOCKSIZE + 2] = 0;
//...
}
As a side note, the reason why the first block wasn't written was simply because I didn't wait until the device was ready before sending the first block. Doing so fixed the problem.
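For reference, the "wait until the device is ready" check I now do before sending a block looks roughly like this (a sketch reusing the SPI::receive helper and the same timeout pattern already used in the SSP interrupt handler above):
// Poll the card until it releases the busy indication (it returns 0xFF when idle).
// Returns false if the card never became ready within the timeout.
bool wait_until_ready() {
    unsigned int timeout = 4096;
    uint8_t response;
    do {
        response = SPI::receive();
    } while (response != 0xFF && --timeout);
    return timeout != 0;
}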

fast reading constant data stream from serial port in C++.net

I'm trying to establish a SerialPort connection which transfers 16-bit data packages at a rate of 10-20 kHz. I'm programming this in C++/CLI. The sender just enters an infinite while-loop after receiving the letter "s" and constantly sends 2 bytes with the data.
A problem with the sending side is very unlikely, since a simpler approach works perfectly but is too slow (in that approach, the receiver always sends an "a" first and then gets one package consisting of 2 bytes; this leads to a speed of around 500 Hz).
Here is the important part of this working but slow approach:
public: SerialPort^ port;
in main:
Parity p = (Parity)Enum::Parse(Parity::typeid, "None");
StopBits s = (StopBits)Enum::Parse(StopBits::typeid, "1");
port = gcnew SerialPort("COM16",384000,p,8,s);
port->Open();
and then doing as often as wanted:
port->Write("a");
int i = port->ReadByte();
int j = port->ReadByte();
This is now the actual approach I'm working with:
static int values[1000000];
static int counter = 0;
void reader(void)
{
SerialPort^ port;
Parity p = (Parity)Enum::Parse(Parity::typeid, "None");
StopBits s = (StopBits)Enum::Parse(StopBits::typeid, "1");
port = gcnew SerialPort("COM16",384000,p,8,s);
port->Open();
unsigned int i = 0;
unsigned int j = 0;
port->Write("s"); //with this command, the sender starts to send constantly
while(true)
{
i = port->ReadByte();
j = port->ReadByte();
values[counter] = j + (i*256);
counter++;
}
}
in main:
Thread^ readThread = gcnew Thread(gcnew ThreadStart(reader));
readThread->Start();
The counter increases (much more) rapidly at a rate of 18472 packages/s, but the values are somehow wrong.
Here is an example:
The values should look like this, with the last 4 bits changing randomly (it's a signal from an analogue-to-digital converter):
111111001100111
Here are some values of the threaded solution given in the code:
1110011001100111
1110011000100111
1110011000100111
1110011000100111
So it looks like the connection reads the data in the middle of the package (to be exact: 3 bits too late). What can I do? I want to avoid a solution where this error is fixed later in the code while reading the packages, because I don't know whether the shifting error gets worse when I edit the reading code later, which I will most likely do.
Thanks in advance,
Nikolas
PS: If this helps, here is the code of the sender-side (an AtMega168), written in C.
uint8_t active = 0;
void uart_puti16(uint16_t val) //function that writes the data to serial port
{
while ( !( UCSR0A & (1<<UDRE0)) ) //wait until serial port is ready
nop(); // wait 1 cycle
UDR0 = val >> 8; //write first byte to sending register
while ( !( UCSR0A & (1<<UDRE0)) ) //wait until serial port is ready
nop(); // wait 1 cycle
UDR0 = val & 0xFF; //write second byte to sending register
}
in main:
while(1)
{
if(active == 1)
{
uart_puti16(read()); //read is the function that gives a 16bit data set
}
}
ISR(USART_RX_vect) //interrupt handler for a received byte
{
if(UDR0 == 'a') //if only 1 single data package is requested
{
uart_puti16(read());
}
if(UDR0 == 's') //for activating constant sending
{
active = 1;
}
if(UDR0 == 'e') //for deactivating constant sending
{
active = 0;
}
}
At the given bit rate of 384,000 you should get 38,400 bytes of data (8 bits of real data plus 2 framing bits) per second, or 19,200 two-byte values per second.
How fast is counter increasing in both instances? I would expect any modern computer to keep up with that rate whether using events or directly polling.
You do not show your simpler approach which is stated to work. I suggest you post that.
Also, set a breakpoint at the line
values[counter] = j + (i*256);
There, inspect i and j. Share the values you see for those variables on the very first iteration through the loop.
This is a guess based entirely on reading the code at http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.datareceived.aspx#Y228. With this caveat out of the way, here's my guess:
Your event handler is being called when data is available to read -- but you are only consuming two bytes of the available data. Your event handler may only be called every 1024 bytes. Or something similar. You might need to consume all the available data in the event handler for your program to continue as expected.
Try to re-write your handler to include a loop that reads until there is no more data available to consume.
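As a sketch of that idea (it uses the documented SerialPort::BytesToRead property and the Read(buffer, offset, count) overload; carrying an odd leftover byte over to the next call is deliberately left out to keep it short):
// Drain everything the driver has buffered instead of reading two bytes per call,
// then combine the bytes into 16-bit values (high byte first, as sent).
void drainPort(SerialPort^ port)
{
    int available = port->BytesToRead;
    if (available < 2)
        return;
    array<unsigned char>^ buffer = gcnew array<unsigned char>(available);
    int read = port->Read(buffer, 0, available);
    for (int k = 0; k + 1 < read; k += 2)
    {
        values[counter] = buffer[k + 1] + (buffer[k] * 256);
        counter++;
    }
    // A real handler must also keep an odd trailing byte for the next call,
    // otherwise the byte pairing will shift.
}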