I'm trying to control the velocity of a motor by writing a register value from a microcontroller.
unsigned long PrintHex32( uint32_t data) // prints 32-bit data in hex with leading zeroes
{
  uint32_t data2 = data << 8;
  char tmp[16];
  uint16_t LSB = data2 & 0xffff;
  uint16_t MSB = data2 >> 16;
  unsigned long ppsval2 = sprintf(tmp, "0x%.4X%.4X%", MSB, LSB);
  Serial.println(tmp);
  Serial.println("***************");
  return tmp;
}

void NoRamp() {
  Serial.println("No Ramp");
  unsigned long ppsVal = (VMAX * FS * uS);
  unsigned long ppsVal3 = PrintHex32(ppsVal);
  Serial.println(ppsVal);
  Serial.println(ppsVal3);
  Serial.println("$$$$$$");

  //********* NO Ramp **********////////
  sendData(0xA0, 0x00000000); //RAMP Mode
  // sendData(0xA4, 0x03E80000); //VMAX-5rps/5hz
  // sendData(0xA4, 0x00c80000);
  sendData(0xA4, ppsVal3); //VMAX-1rps/1hz
}
At the end I need to send the data in that format after the hex conversion, i.e. sendData(0xA4, 0x00c80000).
But currently this is the print output I'm getting:
No Ramp
0x00C80000
***************
51200
0
$$$$$$
Umm, where am I making the mistake? Can anyone kindly slam my head a bit, please!
Thanks heaps!!
PS: the sendData method takes the arguments below, if anyone needs to know!
unsigned long sendData(unsigned long address, unsigned long datagram)
Edit:
I think I now understand the proper question to ask.
unsigned long PrintHex32( uint32_t data) // prints 32-bit data in hex with leading zeroes
{
  uint32_t data2 = data << 8;
  //char tmp[16];
  uint16_t LSB = data2 & 0xffff;
  uint16_t MSB = data2 >> 16;
  unsigned long val = xxxx(uint16_t LSB) + uint16_t MSB(YYY);
  //sprintf(tmp, "0x%.4X%.4X%", MSB, LSB);
  return val;
}
As you can see, I'm splitting my input data2 into uint16_t LSB and uint16_t MSB. How do I combine those two values into one single unsigned long val? Then I can return that variable.
I think using the sprintf method is wrong, as it only produces a character representation?
Please kindly correct me if I'm wrong!
The xxxx in the edit should probably be 2^16 aka 65536:
#include <cassert>
#include <cstdint> // for uint16_t

using namespace std;

int main(int argc, char * argv[]) {
  unsigned int data2 = 12345678;
  uint16_t LSB = data2 & 0xffff;
  uint16_t MSB = data2 >> 16;
  unsigned long val = (1<<16) * MSB + LSB;
  unsigned long val2 = 65536 * MSB + LSB;
  assert(val == data2);
  assert(val2 == data2);
}
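Putting that back into the original sketch: drop the sprintf() entirely and return the recombined 32-bit value, which can then be passed straight to sendData() (its signature at the end of the question takes an unsigned long datagram). A minimal sketch, where ToVmaxDatagram is just a hypothetical name and VMAX, FS, uS and sendData() are assumed to be defined as in the original sketch:

uint32_t ToVmaxDatagram(uint32_t data) // returns the shifted 32-bit register value
{
  uint32_t data2 = data << 8;
  uint16_t LSB = data2 & 0xffff;
  uint16_t MSB = data2 >> 16;
  return ((uint32_t)MSB << 16) | LSB;   // same as 65536UL * MSB + LSB, i.e. simply data2
}

void NoRamp() {
  Serial.println("No Ramp");
  uint32_t ppsVal = (VMAX * FS * uS);          // 51200 in the example
  uint32_t datagram = ToVmaxDatagram(ppsVal);  // 0x00C80000
  Serial.println(datagram, HEX);               // prints C80000 (no leading zeroes)
  sendData(0xA0, 0x00000000);                  // RAMP mode
  sendData(0xA4, datagram);                    // VMAX register
}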
I would like some help as I seem to be unable to calculate the CRC for a single byte, let alone an array of bytes.
Below is the test code that I have been trying to get work. Please note that I used the code from this post, and the example with the array of chars works, but I need to be able to adapt this to work for an array of uint8_t.
When going through the fundamentals of CRC, this seems like it should work but I may be mistaken.
I have been trying to validate the function by checking the outputted remainder (crc) using this site.
If someone could kindly identify where the issue is coming from that leads to the wrong CRC value being calculated, and educate me as to why this is happening, that would be highly appreciated!
#include <cstdint>
#include <iostream>
// Below 2 functions take from
// https://stackoverflow.com/questions/59486262/issues-calculating-crc16-mcrf4xx-for-more-than-1-byte
uint16_t Utils_CRC16_MCRF4XX( uint16_t Crc, uint8_t Byte )
{
  Crc ^= Byte;
  for( uint8_t i = 0; i < 8; i++ )
  {
    Crc = (Crc & 0x0001) != 0 ? (Crc >> 1) ^ 0x8408 :
                                 Crc >> 1;
  }
  return Crc;
}

uint16_t crc( uint8_t* pData, uint32_t Len )
{
  uint16_t Crc = 0xffff;
  for( uint32_t i = 0; i < Len; i++ )
  {
    Crc = Utils_CRC16_MCRF4XX( Crc, pData[i] );
  }
  return (Crc);
}
int main()
{
  int8_t val1 = 30;
  int8_t val2 = 30;

  // Arbitrary Test Message
  uint8_t message[] =
  {
    (uint8_t)54
  };

  /*
  // Sample Test Message (Actually what I intend to transmit)
  uint8_t message[] =
  {
    (uint8_t)250,
    (uint8_t)7,
    (uint8_t)21,
    (uint8_t)2,
    (uint8_t)2,
    (uint8_t)val1,
    (uint8_t)val2
  };
  */

  //uint8_t test[] = "123456789";

  uint16_t remainder = crc(message, sizeof(message));

  // Expected DEC: 28561
  // Actual DEC: 23346
  std::cout << "CRC: " << remainder << std::endl;

  return 0;
}
If you do this:
uint8_t test[] = "123456789";
uint16_t remainder = crc(test, sizeof(test)-1);
then you get the expected result, 28561.
You need to subtract 1 from sizeof(test) to exclude the terminating 0.
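For the uint8_t case from the question, the same crc() function works as-is, since the length is passed explicitly and there is no string terminator involved; for example, with the intended 7-byte message from the commented-out block:

uint8_t message[] = { 250, 7, 21, 2, 2, 30, 30 };
uint16_t remainder = crc(message, sizeof(message));   // sizeof(message) == 7, nothing to exclude
std::cout << "CRC: " << remainder << std::endl;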
I'm supposed to pack some shorts into a 32 bit integer. It's a homework assignment that will lead into a larger idea of compression/decompression.
I don't have any problems understanding how to pack the shorts into an integer, but I am struggling to understand how to get each short value stored within the integer.
So, for example, I store the values 2, 4, 6, 8 into the integer. That means I want to print them in the same order I input them.
How do you go about getting these values out from the integer?
EDIT: Shorts in this context refers to an unsigned two-byte integer.
As Craig corrected me, short is a 16-bit variable, therefore only 2 shorts can fit in one int, so here's my answer for retrieving shorts:
2|4
0000000000000010|0000000000000100
00000000000000100000000000000100
131076
Denoting first as the left-most variable and last as the right-most variable, getting the short variables back would look like this:
int x = 131076; //00000000000000100000000000000100 in binary
short last = x & 65535; // 65535= 1111111111111111
short first= (x >> 16) & 65535;
and here's my previous answer fixed for compressing chars (8 bit variables):
Let's assume the first char is the one that starts at the MSB and the last one is the one that ends at the LSB:
2|4|6|8
00000010|00000100|00000110|00001000
00000010000001000000011000001000
33818120
So, in this example the first char is 2 (00000010), followed by 4 (00000100), 6 (00000110) and last: 8 (00001000).
So to get the compressed numbers back, one could use this code:
int x = 33818120;              // 00000010000001000000011000001000 in binary
char last   = x & 255;         // 255 = 11111111; last   == 8
char third  = (x >> 8) & 255;  // third  == 6
char second = (x >> 16) & 255; // second == 4
char first  = (x >> 24) & 255; // first  == 2
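For completeness, packing the four values in the first place is just the reverse of those shifts (a sketch):

int x = 0;
x |= 2 << 24;   // first
x |= 4 << 16;   // second
x |= 6 << 8;    // third
x |= 8;         // last
// x is now 33818120 (0x02040608)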
This would be more interesting with char (as you'd get 4). But you can only pack two shorts into a single int, so I'm a bit mystified as to what the instructor was thinking.
Consider a union:
union combo {
  int u_int;
  short u_short[2];
  char u_char[4];
};

int
getint1(short s1, short s2)
{
  union combo combo;
  combo.u_short[0] = s1;
  combo.u_short[1] = s2;
  return combo.u_int;
}

short
getshort1(int val, int which)
{
  union combo combo;
  combo.u_int = val;
  return combo.u_short[which];
}
Now consider encoding/decoding with shifts:
unsigned int
getint2(unsigned short s1, unsigned short s2)
{
  unsigned int val;
  val = s1;
  val <<= 16;
  val |= s2;
  return val;
}

unsigned short
getshort2(unsigned int val, int which)
{
  val >>= (which * 16);
  return val & 0xFFFF;
}
The unsigned code above will probably do what you want.
But the next version uses signed values and probably won't work as well: mixed signs between s1/s2 and the encoding/decoding may present problems.
int
getint3(short s1, short s2)
{
  int val;
  val = s1;
  val <<= 16;
  val |= s2;
  return val;
}

short
getshort3(int val, int which)
{
  val >>= (which * 16);
  return val & 0xFFFF;
}
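A quick round-trip check of the unsigned pair might look like this (a sketch):

#include <cassert>

int main(void)
{
  unsigned int packed = getint2(2, 4);   // s1 ends up in the high 16 bits, s2 in the low 16 bits
  assert(getshort2(packed, 1) == 2);     // which == 1 selects the high half (s1)
  assert(getshort2(packed, 0) == 4);     // which == 0 selects the low half (s2)
  return 0;
}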
I have a string of 256*4 bytes of data. These 256* 4 bytes need to be converted into 256 unsigned integers. The order in which they come is little endian, i.e. the first four bytes in the string are the little endian representation of the first integer, the next 4 bytes are the little endian representation of the next integer, and so on.
What is the best way to parse through this data and merge these bytes into unsigned integers? I know I have to use bitshift operators but I don't know in what way.
Hope this helps you
unsigned int arr[256];
char ch[256*4] = "your string";

for (int i = 0, k = 0; i < 256*4; i += 4, k++)
{
  // cast to unsigned char so byte values above 127 don't sign-extend
  arr[k] = (unsigned char)ch[i]
         | (unsigned char)ch[i+1] << 8
         | (unsigned char)ch[i+2] << 16
         | (unsigned char)ch[i+3] << 24;
}
Alternatively, we can use a C/C++ cast to interpret the char buffer as an array of unsigned int. This avoids the explicit shifting, although it relies on the host byte order matching the data (little endian here).
#include <stdio.h>

int main()
{
  char buf[256*4] = "abcd";
  unsigned int *p_int = ( unsigned int * )buf;
  unsigned short idx = 0;
  unsigned int val = 0;

  for( idx = 0; idx < 256; idx++ )
  {
    val = *p_int++;
    printf( "idx = %d, val = %d \n", idx, val );
  }
}
This would print out 256 values, the first one is
idx = 0, val = 1684234849
(and all remaining numbers = 0).
As a side note, "abcd" converts to 1684234849 because it's run on x86 (little endian), where "abcd" is 0x64636261 ('a' being 0x61 and 'd' being 0x64; in little endian, the LSB sits at the lowest address). So 0x64636261 = 1684234849.
Note also, if using C++, reinterpret_cast should be used in this case:
const char *p_buf = "abcd";
const unsigned int *p_int = reinterpret_cast< const unsigned int * >( p_buf );
If your host system is little-endian, just read 4 bytes at a time, shift them properly and combine them into an int:
char bytes[4] = "....";
int i = (unsigned char)bytes[0] | ((unsigned char)bytes[1] << 8)
      | ((unsigned char)bytes[2] << 16) | ((unsigned char)bytes[3] << 24);
If your host is big-endian, do the same and reverse the bytes in the int, or reverse it on-the-fly while copying with bit-shifting, i.e. just change the indexes of bytes[] from 0-3 to 3-0
But you shouldn't even need to do that; just copy the whole char array into the int array if your PC is little-endian:
#define LEN 256
char bytes[LEN*4] = "blahblahblah";
unsigned int uint[LEN];
memcpy(uint, bytes, sizeof bytes);
That said, the best way is to avoid copying at all and use the same array for both types
union
{
  char bytes[LEN*4];
  unsigned int uint[LEN];
} myArrays;

// copy data to myArrays.bytes[], do something with those bytes if necessary
// after populating myArrays.bytes[], get the ints by myArrays.uint[i]
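For example, after the raw bytes have been copied in, reading the integers back is direct (a sketch; sourceBuffer is just a placeholder for wherever the 256*4 input bytes come from):

memcpy(myArrays.bytes, sourceBuffer, sizeof myArrays.bytes); // fill the byte view
for (int i = 0; i < LEN; i++)
  printf("%u\n", myArrays.uint[i]);                          // read the same memory as ints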
I am trying to implement the algorithm of a CRC check, which basically creates a value based on an input message.
So, consider I have a hex message 3F214365876616AB15387D5D59, and I want to obtain the CRC24Q value of the message.
The algorithm that I found to do this is the following:
typedef unsigned long crc24;

crc24 crc_check(unsigned char *input) {
  unsigned char *octets;
  crc24 crc = 0xb704ce; // CRC24_INIT;
  int i;
  int len = strlen(input);
  octets = input;

  while (len--) {
    crc ^= ((*octets++) << 16);
    for (i = 0; i < 8; i++) {
      crc <<= 1;
      if (crc & 0x1000000)
        crc ^= CRC24_POLY;
    }
  }
  return crc & 0xFFFFFF;
}
where *input=3F214365876616AB15387D5D59.
The problem is that ((*octets++) << 16) will shift the ASCII value of the hex character by 16 bits, not the byte value that the pair of hex digits represents.
So, I made a function to convert the hex numbers to characters.
I know the implementation looks weird, and I wouldn't be surprised if it were wrong.
This is the convert function:
char* convert(unsigned char* message) {
  unsigned char* input;
  input = message;
  int p;
  char *xxxx[20];
  xxxx[0] = "";

  for (p = 0; p < length(message) - 1; p = p + 2) {
    char* pp[20];
    pp[0] = input[0];
    char *c[20];
    *input++;
    c[0] = input[0];
    *input++;
    strcat(pp, c);
    char cc;
    char tt[2];
    cc = (char) strtol(pp, &pp, 16);
    tt[0] = cc;
    strcat(xxxx, tt);
  }
  return xxxx;
}
So:
unsigned char *msg_hex="3F214365876616AB15387D5D59";
crc_sum = crc_check(convert((msg_hex)));
printf("CRC-sum: %x\n", crc_sum);
Thank you very much for any suggestions.
Shouldn't the if (crc & 0x8000000) be if (crc & 0x1000000)? Otherwise you're testing the 28th bit, not the 25th, for 24-bit overflow.
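The other issue is the conversion itself, plus the strlen() inside crc_check(), which will stop at any 0x00 byte in the converted data. Below is a minimal sketch of a hex-string-to-bytes conversion; hex_to_bytes() is a hypothetical helper name, and crc_check() would also need to take the byte count as a parameter instead of calling strlen():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Parse "3F21..." into raw bytes; returns the number of bytes written.
static int hex_to_bytes(const char *hex, unsigned char *out) {
  int n = strlen(hex) / 2;
  for (int i = 0; i < n; i++) {
    char pair[3] = { hex[2 * i], hex[2 * i + 1], '\0' };
    out[i] = (unsigned char)strtol(pair, NULL, 16);
  }
  return n;
}

int main(void) {
  const char *msg_hex = "3F214365876616AB15387D5D59";
  unsigned char bytes[64];
  int len = hex_to_bytes(msg_hex, bytes);
  printf("parsed %d bytes, first byte 0x%02X\n", len, bytes[0]); // 13 bytes, 0x3F
  return 0;
}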
I am trying to write some processor independent code to write some files in big endian. I have a sample of code below and I can't understand why it doesn't work. All it is supposed to do is let byte store each byte of data one by one in big endian order. In my actual program I would then write the individual byte out to a file, so I get the same byte order in the file regardless of processor architecture.
#include <iostream>

int main (int argc, char * const argv[]) {
  long data = 0x12345678;
  long bitmask = (0xFF << (sizeof(long) - 1) * 8);
  char byte = 0;

  for(long i = 0; i < sizeof(long); i++) {
    byte = data & bitmask;
    data <<= 8;
  }
  return 0;
}
For some reason byte always has the value 0. This confuses me; I am looking at the debugger and see this:
data = 00010010001101000101011001111000
bitmask = 11111111000000000000000000000000
I would think that data & bitmask would give 00010010, but it just makes byte 00000000 every time! How can this be? I have written some code for the little endian order and this works great, see below:
#include <iostream>

int main (int argc, char * const argv[]) {
  long data = 0x12345678;
  long bitmask = 0xFF;
  char byte = 0;

  for(long i = 0; i < sizeof(long); i++) {
    byte = data & bitmask;
    data >>= 8;
  }
  return 0;
}
Why does the little endian one work and the big endian not? Thanks for any help :-)
You should use the standard functions ntohl() and kin for this. They operate on explicitly sized variables (i.e. uint16_t and uint32_t) rather than the compiler-specific long, which is necessary for portability.
Some platforms provide 64-bit versions in <endian.h>.
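For example, a sketch of writing a 32-bit value in big-endian (network) order using htonl() from the same family (the output filename is just a placeholder):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   // htonl(); other headers on non-POSIX platforms

int main(void) {
  uint32_t data = 0x12345678;
  uint32_t be = htonl(data);          // big-endian regardless of host byte order
  FILE *f = fopen("out.bin", "wb");
  if (f) {
    fwrite(&be, sizeof be, 1, f);     // file bytes: 12 34 56 78
    fclose(f);
  }
  return 0;
}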
In your example, data is 0x12345678.
Your first assignment to byte is therefore:
byte = 0x12000000;
which won't fit in a byte, so it gets truncated to zero.
try:
byte = (data & bitmask) >> ((sizeof(long) - 1) * 8);
You're getting the shifting all wrong.
#include <iostream>

int main (int argc, char * const argv[]) {
  long data = 0x12345678;
  int shift = (sizeof(long) - 1) * 8;
  const unsigned long mask = 0xff;
  char byte = 0;

  for (long i = 0; i < sizeof(long); i++, shift -= 8) {
    byte = (data & (mask << shift)) >> shift;
  }
  return 0;
}
Now, I wouldn't recommend you do things this way. I would recommend instead writing some nice conversion functions. Many compilers have these as builtins. So you can write your functions to do it the hard way, then switch them to just forward to the compiler builtin when you figure out what it is.
#include <tr1/cstdint> // To get uint16_t, uint32_t and so on.

// These functions fill the byte array and don't need to return anything.
inline void to_bigendian(uint16_t val, char bytes[2])
{
  bytes[0] = (val >> 8) & 0xffu;
  bytes[1] = val & 0xffu;
}

inline void to_bigendian(uint32_t val, char bytes[4])
{
  bytes[0] = (val >> 24) & 0xffu;
  bytes[1] = (val >> 16) & 0xffu;
  bytes[2] = (val >> 8) & 0xffu;
  bytes[3] = val & 0xffu;
}
This code is simpler and easier to understand than your loop. It's also faster. And lastly, it is recognized by some compilers and automatically turned into the single byte swap operation that would be required on most CPUs.
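Usage would then be along these lines (a sketch; outfile is a placeholder std::ofstream opened in binary mode):

uint32_t data = 0x12345678;
char bytes[4];
to_bigendian(data, bytes);            // bytes now hold 0x12 0x34 0x56 0x78
outfile.write(bytes, sizeof bytes);   // written in big-endian order regardless of host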
Because you are masking off the top byte of the integer and then not shifting it back down by 24 bits.
Change your loop to:
for(long i = 0; i < sizeof(long); i++) {
  byte = (data & bitmask) >> 24;
  data <<= 8;
}