I ran into a fairly simple problem today. I have a matrix float gradient[COLS][ROWS]. As you probably know, the float type is 32 bits wide.
In my code I run 4 different checks on another table, and for each of them I want to write the result into gradient[][]. What I would like to do is write each of these results on 8 bits of gradient[][], so the least significant byte would contain the result of the first check, the next 8 bits the result of the second check, and so on.
As for the reason I want to do this: I'm trying to synthesize this code with HLS and make it run on a Xilinx ZedBoard. There is not much memory available on the FPGA, so instead of storing the results of my 4 functions in 4 different matrices I would like to store them in the same matrix using bit operations.
I know I can use masks with an AND operator, like gradient[][] & 0xFF. What I'm not sure about, however, is when and how to apply this mask.
As an example, here is the code for one of the checks (sorry for the Spanish names, I didn't write this):
void FullCheck(float brightness_tab[COLS][ROWS]) {
    for (int i = 0; i < ROWS; i++) {
        int previous_point = (int)(brightness_tab[0][i]);
        for (int j = 1; j < COLS - 1; j++) {
            float brightness = brightness_tab[i][j];
            int brightnessi = (int)(brightness);
            gradient[i][j] = brightnessi - previous_point;
            if (!(gradient[i][j] > VALOR_PENDIENTE || gradient[i][j] < -VALOR_PENDIENTE)) {
                if (!(gradient[i][j] + gradient[i][j-1] > VALOR_PENDIENTE_TRUNCAR || gradient[i][j] + gradient[i][j+1] < -VALOR_PENDIENTE_TRUNCAR)) {
                    gradient[i][j] = 0;
                }
            }
            if (j < 2 || i < 2 || COLS - 1 == i) { gradient[i][j] = 0; }
            previous_point = brightnessi;
        }
    }
}
Thank you in advance for your answers!
Deducing from your comments, I'll assume that gradient will be declared as an int array.
In your sample code, there are 2 cases for writing something to the matrix. In the first case, you want to write some value, such as this line:
gradient[i][j] = brightnessi - previous_point;
If you want to write some data to a specific byte, the value you write should itself fit in one byte.
gradient[i][j] = 0; // initialize to all zero bits
int data1 = 0x12; // 1-byte value
gradient[i][j] |= data1; // writing to the 1st byte (LSB)
int data2 = 0x34;
gradient[i][j] |= data2 << 8; // writing to the 2nd byte
int data3 = 0x56;
gradient[i][j] |= data3 << 16; // writing to the 3rd byte
int data4 = 0x78;
gradient[i][j] |= data4 << 24; // writing to the 4th byte
After executing the above code, the value of gradient[i][j] will be 0x78563412.
The second case is clearing what you have written before by writing 0, such as this line:
gradient[i][j] = 0;
In this case you can do:
gradient[i][j] &= 0xffffff00; // clearing the 1st byte (LSB)
gradient[i][j] &= 0xffff00ff; // clearing the 2nd byte
gradient[i][j] &= 0xff00ffff; // clearing the 3rd byte
gradient[i][j] &= 0x00ffffff; // clearing the 4th byte
You could also define a struct that has the same memory layout:
struct Bytes
{
    uint8_t a;
    uint8_t b;
    uint8_t c;
    uint8_t d;
};
Bytes* g = reinterpret_cast<Bytes*>(&gradient[i][j]);
That way you can access the individual bytes easily, like g->a.
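To read one of the check results back out later, shift the packed value down and mask with 0xFF. A minimal sketch, assuming gradient holds int values as above (getCheck is an illustrative helper name, not from the original code):

// Extract the k-th byte (k = 0 for the LSB) from a packed entry.
int getCheck(int packed, int k) {
    return (packed >> (8 * k)) & 0xFF;
}

int second = getCheck(gradient[i][j], 1); // result of the second check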
I have just started to study the CRC and how to implement it in software. My information source is mainly the following document. It describes a simple algorithm for calculating the CRC for any generator polynomial. I have attempted to implement this algorithm in C++. I have tested it for the generator polynomial x^5 + x^4 + x^2 + 1 (CRC-5, the generator polynomial used by the chip) against this online calculator, and it works.
#include <iostream>
#include <cstdint>
using namespace std;

int main() {
    uint8_t data_byte = 0x31;
    // polynomial x^5 + x^4 + x^2 + 1
    uint16_t polynom = 0x35;
    // the register contains 0 at the beginning
    uint32_t crc = 0;
    uint32_t message = 0;
    // shift the message byte left by as many bits as the generator polynomial needs
    message = (data_byte << 5);
    // now the message is 13 bits long
    uint8_t processed_bit = 13;
    while (processed_bit > 0) {
        // prepare free space for the next bit from the message
        crc = crc << 1;
        // find the value of the current msb of the message
        message = message << 1;
        if (message & 0x2000) {
            // msb of the message is "1": set the lsb of the register to "1"
            crc |= 1U;
        } else {
            // msb of the message is "0": set the lsb of the register to "0"
            crc &= ~1U;
        }
        // remove the msb from the message
        message = message & ~0x2000;
        if (crc & 0x20) {
            // subtract the current multiple of the generator polynomial
            crc = crc ^ polynom;
        }
        // remove the msb from the register
        crc = crc & ~0x20;
        processed_bit--;
    }
    cout << "CRC: " << (int)crc << endl;
    return 0;
}
It is obvious that this program is inefficient in terms of execution time, so I have been thinking about how to improve it in this respect. I know there is a variant that uses a look-up table containing the precalculated remainders, but I would like to avoid that method. Does anybody know how to improve the above algorithm from the execution-time perspective? Thanks in advance for any suggestions.
Just a quick glance shows several unnecessary statements. You don't need crc &= ~1U;, since the crc = crc << 1; already put a zero there. You don't need message = message & ~0x2000;, since you are only ever looking at one bit in there. Just let the other bits shift up and away. You don't need the crc = crc & ~0x20;, since the exclusive-or with the polynomial already did that.
If you read the document you linked, you will find that you do not need to process five more bits (13 total). You only need to process the eight message bits. Also reading that document, you do not need to feed in the message bits one at a time. You can exclusive-or the message byte directly onto the CRC register, and then process the eight bits all in the register.
Finally, you can speed up the calculation significantly with a table look up, processing eight bits at a time instead of one bit at a time. This is also described beautifully in the document you linked. You can find code here to automatically generate the table and C code for the calculation.
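For illustration, here is a minimal sketch of such a table-driven CRC-5, keeping the 5-bit register in the top bits of a byte exactly as the unrolled crc5_8 further down does (make_table and crc5_table are illustrative names, not code from the linked generator):

#include <cstdint>
#include <cstddef>

static uint8_t crc5_lut[256];

// Precompute, for every byte value, the result of eight shift/xor steps
// with the polynomial 0x15 placed in the top bits (0x15 << 3 == 0xa8).
void make_table(void) {
    for (int i = 0; i < 256; i++) {
        uint8_t crc = (uint8_t)i;
        for (int k = 0; k < 8; k++)
            crc = crc & 0x80 ? (uint8_t)((crc << 1) ^ 0xa8) : (uint8_t)(crc << 1);
        crc5_lut[i] = crc;
    }
}

// CRC-5 of a message: one table lookup per byte instead of eight bit steps.
uint8_t crc5_table(const uint8_t *buf, size_t len) {
    uint8_t crc = 0;              // zero initial register, as in the question
    while (len--)
        crc = crc5_lut[crc ^ *buf++];
    return crc >> 3;              // the 5 CRC bits sit in the top of the byte
}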
In the end though, none of this matters if you're not calculating the right thing to begin with. You need to verify the calculation with data from the chip first. I found this document with details on the CRC calculation for that chip. You need to spend some time with it and understand it in detail.
To answer your question directly, here is some code that does what your code does, but is much simpler. Also it is extended to work on n bits, not just eight. It does n loops instead of n+5 loops:
// Return a CRC-5 of the low n bits of data. The remaining bits of data must be
// zero. n must be in [5..32].
uint8_t crc5(uint32_t data, int n) {
    int shift = n - 5;
    uint32_t poly = (uint32_t)0x15 << shift;
    uint32_t top = (uint32_t)1 << (n - 1);
    do {
        data = data & top ? (data << 1) ^ poly : data << 1;
    } while (--n);
    return (data >> shift) & 0x1f;
}
Simpler and faster still is the equivalent of yours restricted to eight bits, unrolled:
uint8_t crc5_8(uint8_t data) {
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    data = data & 0x80 ? (data << 1) ^ 0xa8 : data << 1;
    return data >> 3;
}
However neither can calculate what you need for your chip.
I'm working on a project where I have to send 32-bit values over UART to MATLAB, where I need to print them in the MATLAB terminal. I do this by splitting up the 32-bit value into 8-bit values, like so:
void Configurator::send(void) {
    /**
     * Split the 32 bits into 4 bytes of 8 bits each
     */
    union {
        uint32_t data;
        uint8_t bytes[4];
    } splitData;
    splitData.data = 1234587;
    for (int n : splitData.bytes) {
        XUartPs_SendByte(STDOUT_BASEADDRESS, splitData.bytes[n]);
    }
}
In MATLAB I receive the following 4 bytes:
252
230
25
155
Now the question is, how do I restore the 1234587?
Am I correct in creating an array of size 4 of uint8_t? I would also like to note that I'm using a union for readability. If I'm doing it wrong, I'd be happy to hear why!
You could use left shifts to restore the value. Cast each byte up to uint32_t before shifting, so that shifting into the high byte cannot overflow a plain int:

uint32_t value = ((uint32_t)byte[3]<<24) + ((uint32_t)byte[2]<<16) + ((uint32_t)byte[1]<<8) + ((uint32_t)byte[0]<<0);
Try to avoid using unions for this sort of thing. It is not (in principle) portable, and can cause undefined behaviour. Instead write it like this:
void Configurator::send(void) {
    /**
     * Split the 32 bits into 4 bytes of 8 bits each
     */
    uint32_t data = 1234587;
    for (int n = 0; n < 4; n++) {
        unsigned char octet = (data >> (n * 8)) & 0xFF;
        XUartPs_SendByte(STDOUT_BASEADDRESS, octet);
    }
}
uint32_t receiveBytes()
{
    uint32_t result = 0;
    for (int n = 0; n < 4; n++)
    {
        unsigned char octet = getOctet(); // getOctet(): however you read a single byte
        uint32_t octet32 = octet;
        result |= octet32 << (n * 8);
    }
    return result;
}
The point is that by shifting bytes out like this, you avoid any problems with endianness. The masking also means that if either end has 32-bit chars (such platforms exist), it all works anyway.
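The two loops are inverses of each other. As a sanity check, here is a minimal round-trip sketch that buffers the bytes instead of sending them over the UART (illustrative only, not part of the original code):

#include <cstdint>
#include <cassert>

int main() {
    uint32_t data = 1234587;
    unsigned char bytes[4];
    for (int n = 0; n < 4; n++)    // sender side: emit the LSB first
        bytes[n] = (data >> (n * 8)) & 0xFF;
    uint32_t result = 0;
    for (int n = 0; n < 4; n++)    // receiver side: shift each byte back into place
        result |= (uint32_t)bytes[n] << (n * 8);
    assert(result == data);        // the round trip is lossless
}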
The dataFile.bin is a binary file with 6-byte records. The first 3 bytes of each record contain the latitude and the last 3 bytes contain the longitude. Each 24-bit value represents radians multiplied by 0x1FFFFF.
This is a task I've been working on. I haven't done C++ in years, so it's taking me way longer than I thought it would -_-. After googling around I saw this algorithm, which made sense to me.
int interpret24bitAsInt32(byte[] byteArray) {
    int newInt = (
        ((0xFF & byteArray[0]) << 16) |
        ((0xFF & byteArray[1]) << 8) |
        (0xFF & byteArray[2])
    );
    if ((newInt & 0x00800000) > 0) {
        newInt |= 0xFF000000;
    } else {
        newInt &= 0x00FFFFFF;
    }
    return newInt;
}
The problem is a syntax issue; I am restricted to working within the way the other guy programmed this. I am not understanding how I can store the CHAR "data" into an INT. Wouldn't it make more sense if "data" was an array? Since it's receiving 24 integers of information stored into a BYTE.
double BinaryFile::from24bitToDouble(char *data) {
    int32_t iValue;
    // ****************************
    // Start code implementation
    // Task: Fill iValue with the 24bit integer located at data.
    // The first byte is the LSB.
    // ****************************
    //iValue +=
    // ****************************
    // End code implementation
    // ****************************
    return static_cast<double>(iValue) / FACTOR;
}

bool BinaryFile::readNext(DataRecord &record)
{
    const size_t RECORD_SIZE = 6;
    char buffer[RECORD_SIZE];
    m_ifs.read(buffer, RECORD_SIZE);
    if (m_ifs) {
        record.latitude = toDegrees(from24bitToDouble(&buffer[0]));
        record.longitude = toDegrees(from24bitToDouble(&buffer[3]));
        return true;
    }
    return false;
}

double BinaryFile::toDegrees(double radians) const
{
    static const double PI = 3.1415926535897932384626433832795;
    return radians * 180.0 / PI;
}
I appreciate any help or hints; even if you don't understand, a clue or hint will help me a lot. I just need to talk to someone.
I am not understanding how I can store the CHAR "data" into an INT.
Since char is a numeric type, there is no problem combining them into a single int.
Since it's receiving 24 integers of information stored into a BYTE
It's 24 bits, not bytes, so there are only three integer values that need to be combined.
An easier way of producing the same result without using conditionals is as follows:
int interpret24bitAsInt32(byte[] byteArray) {
    return (
        (byteArray[0] << 24)
        | (byteArray[1] << 16)
        | (byteArray[2] << 8)
    ) >> 8;
}
The idea is to store the three input bytes in the upper three bytes of the four-byte int, and then shift the result down by one byte. This way the program sign-extends the number automatically, avoiding conditional execution. For example, the bytes {0xFF, 0xFF, 0xFE} assemble into 0xFFFFFE00, and the arithmetic shift brings that down to 0xFFFFFFFE, i.e. -2.
Note on portability: This code is not portable, because it assumes 32-bit integer size. To make it portable use <cstdint> types:
int32_t interpret24bitAsInt32(const std::array<uint8_t, 3> byteArray) {
    return (
        (static_cast<int32_t>(byteArray[0]) << 24)
        | (static_cast<int32_t>(byteArray[1]) << 16)
        | (static_cast<int32_t>(byteArray[2]) << 8)
    ) >> 8;
}
It also assumes that the most significant byte of the 24-bit number is stored in the initial element of byteArray, then comes the middle element, and finally the least significant byte.
Note on sign extension: This code automatically takes care of sign extension by constructing the value in the upper three bytes and then shifting it to the right, as opposed to constructing the value in the lower three bytes right away. This additional shift operation ensures that C++ takes care of sign-extending the result for us.
When an unsigned char is cast to an int, the higher-order bits are filled with 0s.
When a signed char is cast to an int, the sign bit is extended.
i.e.:

int x;
char y;
unsigned char z;

y = 0xFF;
z = 0xFF;

x = y;
/* x will be 0xFFFFFFFF */
x = z;
/* x will be 0x000000FF */
So your algorithm uses 0xFF as a mask to remove the effect of C's sign extension, i.e.

0xFF == 0x000000FF
0xABCDEF10 & 0x000000FF == 0x00000010

Then it uses bit shifts and bitwise ORs to put the bits in their proper place. Lastly, it checks the most significant bit with (newInt & 0x00800000) > 0 to decide whether to fill the highest byte with zeros or ones.
int32_t upperByte = ((int32_t) dataRx[0] << 24);
int32_t middleByte = ((int32_t) dataRx[1] << 16);
int32_t lowerByte = ((int32_t) dataRx[2] << 8);
int32_t ADCdata32 = (((int32_t) (upperByte | middleByte | lowerByte)) >> 8); // right shift of signed data maintains the sign bit
I need to convert little-endian 64-bit values to host byte order. I can't find such a function in the WinAPI, so I need to write my own. Can anyone help me? Thanks!
Use htonll. It converts an unsigned __int64 to network byte order.
I think you need to determine the host's endianness first; then you can decide whether you need to convert anything:
#define BIG_ENDIAN 1
#define LITTLE_ENDIAN 0

int getEndianness()
{
    int n = 1;
    char *p = (char *)&n; // inspect the first byte of the int
    if (*p)
        return LITTLE_ENDIAN;
    else
        return BIG_ENDIAN;
}
On Linux you can use uint64_t htobe64(uint64_t host_64bits); check the man page for more details.
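For the little-endian-to-host direction specifically, the same <endian.h> family also provides le64toh. A minimal sketch, assuming glibc:

#include <endian.h>
#include <cstdint>
#include <cstdio>

int main() {
    uint64_t wire = 0xefcdab8967452301ULL; // 64-bit value as read from a little-endian source
    uint64_t host = le64toh(wire);         // convert little-endian to host byte order
    std::printf("%llx\n", (unsigned long long)host);
}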
If you're reading external data, the usual solution is to build up the individual values as specified:
unsigned long long // The only type guaranteed long enough for 64 bits
readData( std::istream& source )
{
    // Cast before shifting: shifting a plain int by 24 or more bits would overflow.
    unsigned long long results = source.get();
    results |= static_cast<unsigned long long>(source.get()) << 8;
    results |= static_cast<unsigned long long>(source.get()) << 16;
    results |= static_cast<unsigned long long>(source.get()) << 24;
    results |= static_cast<unsigned long long>(source.get()) << 32;
    results |= static_cast<unsigned long long>(source.get()) << 40;
    results |= static_cast<unsigned long long>(source.get()) << 48;
    results |= static_cast<unsigned long long>(source.get()) << 56;
    return results;
}
Of course, you really need some sort of error checking, in case the file ends in the middle of the 8 bytes. (But it is sufficient to check once, after all of the bytes have been read.)
If the data is already in a buffer, then just substitute static_cast<unsigned char>(*p++) for source.get() (where p points to the position in the buffer). In this case, you also have to ensure that there are 8 bytes between the initial p and the end of the buffer before doing the conversion.
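For example, the buffer form described above might look like this (readLE64 is an illustrative name):

unsigned long long readLE64(const unsigned char* p)
{
    unsigned long long results = 0;
    for (int i = 0; i < 8; ++i) // LSB first, as specified
        results |= static_cast<unsigned long long>(p[i]) << (8 * i);
    return results;
}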
I am trying to edit each byte of a buffer by modifying the LSB (least significant bit) according to some requirements. I am using the unsigned char type for the bytes, so please let me know if that is correct or wrong.
unsigned char buffer[MAXBUFFER];
Next, I'm using this function
char *uchartob(char s[9], unsigned char u)
which modifies and returns the first parameter as an array of bits. This function works just fine, as the bits in the array represent the second parameter.
Here's where the hassle begins. I am going to point out what I'm trying to do step by step, so you guys can let me know where I'm taking the wrong turn.
I am saving the result of the above function (called for each element of the buffer) in a variable
char binary_byte[9]; // array of bits
I am testing the LSB by simply comparing it to some flag, like this:

if (binary_byte[7] == bit_flag) // I go on and modify it like this
    binary_byte[7] = 0;         // or 1, depending on the case
Next, I'm trying to convert the array of bits binary_byte (it is an array of bits, isn't it?) back into a byte/unsigned char and update the data in the buffer at the same time. I hope I am making myself clear enough, as I am really confused at the moment.
buffer[position_in_buffer] = binary_byte[0]<<7 | // update the current byte in the buffer
                             binary_byte[1]<<6 |
                             binary_byte[2]<<5 |
                             binary_byte[3]<<4 |
                             binary_byte[4]<<3 |
                             binary_byte[5]<<2 |
                             binary_byte[6]<<1 |
                             binary_byte[7];
Keep in mind that the bit at position binary_byte[7] may be modified; that's the point of all this.
The solution is not really elegant, but it's working, even though I am really unsure about what I did (I tried to do it with bitwise operators but without success).
The weird thing is when I try to print the updated character from the buffer: it has the same bits as the previous character, but it's a completely different one. My final question is: what effect does changing only the LSB of a byte have? What should I expect? As you can see, I'm getting only "new" characters even when I shouldn't.
So I'm still a little unsure what you are trying to accomplish here, but since you are trying to modify individual bits of a byte, I would propose the following data structure:
union bit_byte
{
    struct {
        unsigned bit0 : 1;
        unsigned bit1 : 1;
        unsigned bit2 : 1;
        unsigned bit3 : 1;
        unsigned bit4 : 1;
        unsigned bit5 : 1;
        unsigned bit6 : 1;
        unsigned bit7 : 1;
    } bits;
    unsigned char all;
};
This will allow you to access each bit of your byte and still get the whole byte. Here is some quick sample code:
bit_byte myValue;
myValue.bits.bit0 = 1; // set the LSB
// test the LSB
if (myValue.bits.bit0 == 1) {
    myValue.bits.bit7 = 1;
}
printf("%i", myValue.all);
bitwise:
set bit => a |= 1 << x;
reset bit => a &= ~(1 << x);
bit check => a & (1 << x);
flip bit => a ^= (1 << x);
If you cannot manage this, you can always use std::bitset.
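Applied to the question's buffer, a minimal sketch that tests and flips the LSB of each byte directly, with no intermediate character array (the buffer contents are just example data):

#include <bitset>
#include <iostream>

int main() {
    unsigned char buffer[4] = {'a', 'b', 'c', 'd'};
    for (unsigned char &byte : buffer) {
        if (byte & 1u)   // bit check on the LSB
            byte &= ~1u; // reset the LSB
        else
            byte |= 1u;  // set the LSB
        std::cout << std::bitset<8>(byte) << '\n'; // show the modified bits
    }
}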
Helper macros:
#define SET_BIT(where, bit_number) ((where) |= 1 << (bit_number))
#define RESET_BIT(where, bit_number) ((where) &= ~(1 << (bit_number)))
#define FLIP_BIT(where, bit_number) ((where) ^= 1 << (bit_number))
#define GET_BIT_VALUE(where, bit_number) (((where) & (1 << (bit_number))) >> (bit_number)) // this will return 0 or 1
Helper application to print bits:
#include <iostream>
#include <cstdint>

#define GET_BIT_VALUE(where, bit_number) (((where) & (1 << (bit_number))) >> (bit_number))

template<typename T>
void print_bits(T const& value)
{
    for (uint8_t bit_count = 0;
         bit_count < (sizeof(T) << 3); // number of bits in T
         ++bit_count)
    {
        std::cout << GET_BIT_VALUE(value, bit_count) << std::endl;
    }
}

int main()
{
    unsigned int f = 8;
    print_bits(f);
}