I have an NSMutableData holding random ASCII bytes.
I would like to shift the values of those bytes by a value (X).
So let us say I have something like this:
02 00 02 4e 00
I now want to increase every single byte by 0x01 to get:
03 01 03 4f 01
What is the best approach for this?
Use mutableBytes to get a pointer to your bytes, and treat them like a normal C array.
uint8_t originalBytes[] = {0x02, 0x00, 0x02, 0x4e, 0x00};
NSMutableData * myData = [NSMutableData dataWithBytes:originalBytes length:5];
uint8_t * bytePtr = [myData mutableBytes];
for (NSUInteger i = 0; i < [myData length]; i++) {
bytePtr[i] += 0x01;
}
NSLog(@"%@", myData);
There's more info in the Binary Data Programming Guide article, "Working With Mutable Binary Data".
Also, what you're doing is not "shifting" but merely adding 0x01. "Shifting" typically refers to "bit shifting".
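For illustration, here is the difference on a single byte (a small plain-C sketch, not part of the code above):

uint8_t b = 0x4e;
uint8_t added   = b + 0x01;   // 0x4f -- adding 1, which is what the question asks for
uint8_t shifted = b << 1;     // 0x9c -- a bit shift, a different operation entirely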
Something like this:
NSString *_chars = @"abcd";
NSMutableData *_data = [NSMutableData dataWithBytes:[_chars UTF8String] length:_chars.length];
NSLog(@"Bef. : %@", _data);
for (int i = 0; i < _data.length; i++) ((char *)[_data mutableBytes])[i]++;
NSLog(@"Aft. : %@", _data);
the log shows the result:
Bef. : <61626364>
Aft. : <62636465>
While working with the following PDF, I found an example in
Section 4: CRC-16 Code and Example
(page 95 or 91) that shows a serial packet with a CRC-16 value of 133 (LSB) and 24 (MSB).
However, I have tried different calculators, for example:
Lammert
Elaborate calculator
CRC calc
but I cannot get the CRC16 values that the PDF indicates, regardless of the byte combination I use.
How can I correctly calculate the CRC16 in the example, preferably using one of these calculators? (Otherwise, C/C++ code would also work.)
Thanks.
This particular CRC is CRC-16/ARC. crcany generates the code for this CRC, which includes this simple bit-wise routine:
#include <stddef.h>
#include <stdint.h>
uint16_t crc16arc_bit(uint16_t crc, void const *mem, size_t len) {
unsigned char const *data = mem;
if (data == NULL)
return 0;
for (size_t i = 0; i < len; i++) {
crc ^= data[i];
for (unsigned k = 0; k < 8; k++) {
crc = crc & 1 ? (crc >> 1) ^ 0xa001 : crc >> 1;
}
}
return crc;
}
The standard interface is to do crc = crc16arc_bit(0, NULL, 0); to get the initial value (zero in this case), and then crc = crc16arc_bit(crc, data, len); with successive portions of the message to compute the CRC.
If you do that on the nine-byte message in the appendix, {1, 2, 1, 0, 17, 3, 'M', 'O', 'C'}, the returned CRC is 0x1885, which has the least significant byte 133 in decimal and most significant byte 24 in decimal.
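For example, a minimal driver (my own sketch, assuming the crc16arc_bit routine above is in scope) that checks the appendix message:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    unsigned char msg[] = {1, 2, 1, 0, 17, 3, 'M', 'O', 'C'};
    uint16_t crc = crc16arc_bit(0, NULL, 0);       /* get the initial value (0) */
    crc = crc16arc_bit(crc, msg, sizeof(msg));     /* feed the whole message at once */
    printf("0x%04x -> LSB %d, MSB %d\n", crc, crc & 0xff, crc >> 8);
    /* prints: 0x1885 -> LSB 133, MSB 24 */
    return 0;
}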
Faster table-driven routines are also generated by crcany.
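For reference, a byte-at-a-time table-driven version might look like the sketch below. This is my own rendering of the usual reflected-CRC table technique, not the exact code crcany emits; crc16arc_table_init() has to run once before the first call to crc16arc_byte().

#include <stddef.h>
#include <stdint.h>

static uint16_t crc16arc_table[256];

/* Build the lookup table once: entry n is the CRC of the single byte n. */
static void crc16arc_table_init(void) {
    for (unsigned n = 0; n < 256; n++) {
        uint16_t crc = (uint16_t)n;
        for (unsigned k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0xa001 : crc >> 1;
        crc16arc_table[n] = crc;
    }
}

/* Same interface as crc16arc_bit(), but one table lookup per input byte. */
uint16_t crc16arc_byte(uint16_t crc, void const *mem, size_t len) {
    unsigned char const *data = (unsigned char const *)mem;
    if (data == NULL)
        return 0;
    while (len--)
        crc = (crc >> 8) ^ crc16arc_table[(crc ^ *data++) & 0xff];
    return crc;
}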
If you give 01 02 01 00 11 03 4d 4f 43 as hex to Lammert Bies' calculator, the very first one, called "CRC-16" gives 0x1885.
If you give 0102010011034d4f43 to crccalc.com and hit Calc-CRC16, the second line is CRC-16/ARC, with the result 0x1885.
I'm having issues trying to set a char variable to a hexadecimal value.
I'm trying to implement a print function where I print the alignment of a structure. The "aa" is supposed to represent the padding, but I can't seem to set the variable in the default constructor.
The output should be:
0x00: 00 00 00 00
0x04: 00 aa aa aa
.h file
struct A
{
int a0;
char a1;
char pad0;
char pad1;
char pad2;
A() {
a0 = 0;
a1 = 0;
pad0 = 0xAA;
pad1 = 0xAA;
pad2 = 0xAA;
}
};
.cpp
void Alignment::Print(void *data)
{
int d {0 };
for (int i = 0; i <2; ++i) {
printf("\n0x%02x:", (d));
d = d + 4;
for (int j = 0; j < sizeof(data); ++j)
{
printf(" %.2x", (*((unsigned char*)data+j )));
}
}
}
Main
A *pa = new A;
pa is passed into the function.
My Output
0x00: 00 00 00 00
0x04: 00 00 00 00
There are a few issues with your Print function. Some are "just" style, but they make fixing the real issue harder. (Also, you jumped to conclusions, possibly blinding you to the real issue.) First off, try to explain why the line
printf(" %.2x", (*((unsigned char*)data+j )));
is supposed to do something different just because i and d changed. Neither of those variables appears in this line, so you get the same output with each iteration of the outer loop. The problem is not in the setting of data, but in the reading of it. For each line of output, you print the first four bytes of data when you intended to print the next four bytes.
To get the next four bytes, you need to add d to the pointer, but this is problematic because you increased d earlier in this iteration. Better would be to have d keep the same value throughout each iteration. You could increase it at the end of the iteration instead of the middle, but even slicker might be to use d to control the loop instead of i.
Finally, a bit of robustness: your code uses the magic number 4 when increasing d, which only works if sizeof(data) is 4. Fragile. Better would be to use a symbolic constant for this magic value to ensure consistency. I'll set it explicitly to 4 since I don't see why the size of a pointer should impact how the pointed-to data is displayed.
void Alignment::Print(void *data)
{
constexpr int BytesPerLine = 4; // could use sizeof(void *) instead of 4
const unsigned char * bytes = static_cast<unsigned char*>(data); // Taking the conversion out of the loop.
for (int d = 0; d < 2*BytesPerLine; d += BytesPerLine) {
printf("\n0x%02x:", (d));
for (int j = 0; j < BytesPerLine; ++j)
{
printf(" %.2x", *(bytes + d + j)); // Add d here!
}
}
}
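To round it out, here is a self-contained sketch of how this could be driven. The Alignment wrapper here is hypothetical, just enough to make the snippet compile on its own, and the second output line assumes the usual layout where the compiler pads A to 8 bytes:

#include <cstdio>

struct A {
    int a0;
    char a1, pad0, pad1, pad2;
    A() { a0 = 0; a1 = 0; pad0 = 0xAA; pad1 = 0xAA; pad2 = 0xAA; }
};

struct Alignment {              // hypothetical minimal wrapper for the Print member
    void Print(void *data);
};

void Alignment::Print(void *data)
{
    constexpr int BytesPerLine = 4;
    const unsigned char *bytes = static_cast<unsigned char *>(data);
    for (int d = 0; d < 2 * BytesPerLine; d += BytesPerLine) {
        printf("\n0x%02x:", d);
        for (int j = 0; j < BytesPerLine; ++j)
            printf(" %.2x", *(bytes + d + j));
    }
    printf("\n");
}

int main()
{
    A *pa = new A;
    Alignment al;
    al.Print(pa);   // expected: 0x00: 00 00 00 00  /  0x04: 00 aa aa aa
    delete pa;
}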
I'm trying to shift an array of unsigned char to the right with some binary 1.
Example: 0000 0000 | 0000 1111 that I shift 8 times will give me 0000 1111 | 1111 1111 (left shift in binary)
So in my array I will get: {0x0F, 0x00, 0x00, 0x00} => {0xFF, 0x0F, 0x00, 0x00} (right shift in the array)
I currently have this using the function memmove:
unsigned char dataBuffer[8] = {0x0F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
unsigned int shift = 4;
unsigned length = 8;
memmove(dataBuffer, dataBuffer - shift, length + shift);
for(int i = 0 ; i < 8 ; i++) printf("0x%X ", dataBuffer[i]);
Output: 0x0 0x0 0x0 0x0 0xF 0x0 0x0 0x0
Expected output: 0xFF 0x0 0x0 0x0 0x0 0x0 0x0 0x0
As you can see, I managed to shift my array only element by element and I don't know how to replace the 0 with 1. I guess that using memset could work but I can't use it correctly.
Thanks for your help!
EDIT: It's in order to fill a bitmap zone of an exFAT disk. When you write a cluster in a disk, you have to set the corresponding bit of the bitmap to 1 (first cluster is first bit, second cluster is second bit, ...).
A newly formatted drive will contain 0x0F in the first byte of the bitmap so the proposed example corresponds to my needs if I write 8 clusters, I'll need to shift the value 8 times and fill it with 1.
In the code, I write 4 clusters and need to shift the value by 4 bits, but it is shifted by 4 bytes instead.
Setting the question as solved: it isn't possible to do what I wanted that way. Instead of shifting the bits of the array as a whole, I need to set each bit of the array separately.
Here's the code if it can help anyone else:
unsigned char dataBuffer[11] = {0x0F, 0x00, 0x00, 0x00, 0, 0, 0, 0};
unsigned int sizeCluster = 6;
unsigned int firstCluster = 4;
unsigned int bitIndex = firstCluster % 8;
unsigned int byteIndex = firstCluster / 8;
for(int i = 0 ; i < sizeCluster; i++){
dataBuffer[byteIndex] |= 1 << bitIndex;
//printf("%d ", bitIndex);
//printf("%d \n\r", byteIndex);
bitIndex++;
if(bitIndex % 8 == 0){
bitIndex = 0;
byteIndex++;
}
}
for(int i = 0 ; i < 10 ; i++) printf("0x%X ", dataBuffer[i]);
OUTPUT: 0xFF 0x3 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
sizeCluster is the number of clusters I want to add in the Bitmap
firstCluster is the first cluster where I can write my data (4 clusters are used: 0, 1, 2, and 3 so I start at 4).
bitIndex is used to modify the right bit in the byte of the array => increments each time.
byteIndex is used to modify the right byte of the array => increments each time the bit is equal to 7.
In case you don't want to use C++ std::bitset for performance reasons, your code can be rewritten like this:
#include <cstdio>
#include <cstdint>
// buffer definition
constexpr size_t clustersTotal = 83;
constexpr size_t clustersTotalBytes = (clustersTotal+7)>>3; //ceiling(n/8)
uint8_t clustersSet[clustersTotalBytes] = {0x07, 0};
// clusters 0, 1 and 2 are already set (for demonstration)
// helper constant bit masks for faster bit setting
// could be extended to uint64_t and an array of qwords on 64b architectures,
// but I couldn't be bothered to write all the masks by hand.
// also I wonder when these lookup tables would become large enough
// to disturb cache locality, making shifting in code faster instead.
const uint8_t bitmaskStarting[8] = {0xFF, 0xFE, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0, 0x80};
const uint8_t bitmaskEnding[8] = {0x01, 0x03, 0x07, 0x0F, 0x1F, 0x3F, 0x7F, 0xFF};
constexpr uint8_t bitmaskFull = 0xFF;
// Input values
size_t firstCluster = 6;
size_t sizeCluster = 16;
// set bits (like "void setBits(size_t firstIndex, size_t count);" )
auto lastCluster = firstCluster + sizeCluster - 1;
printf("From cluster %d, size %d => last cluster is %d\n",
firstCluster, sizeCluster, lastCluster);
if (0 == sizeCluster || clustersTotal <= lastCluster)
return 1; // Invalid input values
auto firstClusterByte = firstCluster>>3; // div 8
auto firstClusterBit = firstCluster&7; // remainder
auto lastClusterByte = lastCluster>>3;
auto lastClusterBit = lastCluster&7;
if (firstClusterByte < lastClusterByte) {
// Set the first byte of sequence (by mask from lookup table (LUT))
clustersSet[firstClusterByte] |= bitmaskStarting[firstClusterBit];
// Set bytes between first and last (simple 0xFF - all bits set)
while (++firstClusterByte < lastClusterByte)
clustersSet[firstClusterByte] = bitmaskFull;
// Set the last byte of sequence (by mask from ending LUT)
clustersSet[lastClusterByte] |= bitmaskEnding[lastClusterBit];
} else { //firstClusterByte == lastClusterByte special case
// Intersection of starting/ending LUT masks is set
clustersSet[firstClusterByte] |=
bitmaskStarting[firstClusterBit] & bitmaskEnding[lastClusterBit];
}
for (size_t i = 0; i < clustersTotalBytes; ++i)
printf("0x%X ", clustersSet[i]); // Your debug display of buffer
Unfortunately I didn't profile either version (yours vs. mine), so I have no idea how good the optimized compiler output is in either case. In the age of lame C compilers and 386-586 processors my version would have been much faster. With a modern C compiler the LUT usage can be a bit counterproductive, but unless somebody proves me wrong with profiling results, I still think my version is considerably more efficient.
That said, as writing to the file system is probably involved around this, setting bits will likely take about 0.1% of CPU time even with your variant; waiting on I/O will be the major factor.
So I'm posting this more as an example of how things can be done in a different way.
Edit:
Also, if you believe in the C library's optimizations, this:
// Set bytes between first and last (simple 0xFF - all bits set)
while (++firstClusterByte < lastClusterByte)
clustersSet[firstClusterByte] = bitmaskFull;
can reuse the memset magic from the C library:
//#include <cstring>
// Set bytes between first and last (simple 0xFF - all bits set)
if (++firstClusterByte < lastClusterByte)
memset(clustersSet + firstClusterByte, bitmaskFull, lastClusterByte - firstClusterByte);
I've been struggling with this one for a while now. I'm trying to read a file with hexadecimal information in it; let's say the contents of the file look like this:
00 00 e0 3a 12 16 00 ff fe 98 c4 cc ce 14 0e 0a aa cf
I'm looking for a method of reading each 'byte' of information (whilst ignoring whitespace) in C++.
Since the code I'm writing is part of a class I've created a small program to demonstrate what I'm currently doing.
int main(void)
{
int i = 0;
unsigned char byte;
unsigned char Memory[50];
ifstream f_in;
f_in.open("file_with_hex_stuff.txt");
f_in >> skipws; // Skip Whitespace
while(f_in >> hex >> byte)
{
Memory[i++] = byte;
}
return 0;
}
(Apologies if the code above doesn't compile; it's just so you can get a feel for what I want.)
I expected the array to be like this:
Memory[0] => 0x00;
Memory[1] => 0x00;
Memory[2] => 0xe0;
Memory[3] => 0x3a;
Memory[4] => 0x12;
etc....
But instead it loads each hex digit into its own position in the array, like this:
Memory[0] => 0x00; // 2 LOCATIONS USED FOR 0x00
Memory[1] => 0x00; //
Memory[2] => 0x00; // 2 LOCATIONS USED FOR 0x00
Memory[3] => 0x00; //
Memory[4] => 0x0e; // 2 LOCATIONS USED FOR 0xe0
Memory[5] => 0x00; //
Memory[6] => 0x03; // 2 LOCATIONS USED FOR 0x3a
Memory[7] => 0x0a; //
etc...
Apologies if this post makes no sense. Any feedback is appreciated.
Thanks.
From your output, your program is reading one hex digit at a time.
Try reading into an unsigned int instead:
unsigned int value;
while (f_in >> hex >> value)
{
byte = (unsigned char)(value & 0xff);
Memory[i++] = byte;
//...
}
Also, try using a debugger to explore your code. For example, you could add value and byte as watch variables and single step through your code.
Demo.
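Putting it together, a complete sketch of the fixed loop could look like this (the file name is just the one from the question, and the buffer size matches the original 50-byte array):

#include <fstream>
#include <iomanip>
#include <iostream>

int main()
{
    std::ifstream f_in("file_with_hex_stuff.txt");
    unsigned char Memory[50];
    int i = 0;

    unsigned int value;
    while (i < 50 && f_in >> std::hex >> value)   // operator>> already skips whitespace
        Memory[i++] = static_cast<unsigned char>(value & 0xff);

    for (int j = 0; j < i; ++j)                   // print back what was read, as two-digit hex
        std::cout << std::setw(2) << std::setfill('0') << std::hex
                  << static_cast<int>(Memory[j]) << ' ';
    std::cout << '\n';
    return 0;
}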
This HMACSHA1 code below works for converting "Password" and "Message" to AFF791FA574D564C83F6456CC198CBD316949DC9, as evidenced by http://buchananweb.co.uk/security01.aspx.
My question is: is it possible to have:
BYTE HMAC[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};
BYTE data2[] = {0x4D,0x65,0x73,0x73,0x61,0x67,0x65};
And still get the same value: AFF791FA574D564C83F6456CC198CBD316949DC9.
For example, if I was on a server and received the packet:
[HEADER] 08 50 61 73 73 77 6F 72 64 00
[HEADER] 07 4D 65 73 73 61 67 65 00
And I rip 50 61 73 73 77 6F 72 64 & 4D 65 73 73 61 67 65 from the packet and use them for my HMACSHA1. How would I go about doing that to get the correct HMACSHA1 value?
BYTE HMAC[] = "Password";
BYTE data2[] = "Message";
//BYTE HMAC[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};
//BYTE data2[] = {0x4D,0x65,0x73,0x73,0x61,0x67,0x65};
HMAC_CTX ctx;
result = (unsigned char*) malloc(sizeof(char) * result_len);
ENGINE_load_builtin_engines();
ENGINE_register_all_complete();
HMAC_CTX_init(&ctx);
HMAC_Init_ex(&ctx, HMAC, strlen((const char*)HMAC), EVP_sha1(), NULL);
HMAC_Update(&ctx, data2, strlen((const char*)(data2)));
HMAC_Final(&ctx, result, &result_len);
HMAC_CTX_cleanup(&ctx);
std::cout << "\n\n";
for(int i=0;i<result_len;i++)
std::cout << setfill('0') << setw(2) << hex << (int)result[i];
int asd;
std::cin >> asd;
// AFF791FA574D564C83F6456CC198CBD316949DC9
EDIT:
It works by doing this:
BYTE HMAC[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64, 0x00};
BYTE data2[] = {0x4D,0x65,0x73,0x73,0x61,0x67,0x65, 0x00};
By adding 0x00 at the end. But my question is more about ripping it from the data and using it... would it still be fine?
The issue is the relationship between arrays, strings, and the null char.
When you declare "Password", the compiler logically treats the string literal as a nine byte array, {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64, 0x00}. When you call strlen, it will count the number of bytes until it encounters the first 0x00. strlen("Password") will return 8 even though there are technically nine characters in the array of characters.
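A quick way to see this (my own sketch, the variable names are just for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char literal[] = "Password";                                      /* 9 bytes: 8 chars + '\0' */
    unsigned char raw[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};  /* 8 bytes, no '\0' */

    printf("sizeof literal = %zu, strlen literal = %zu\n",
           sizeof literal, strlen(literal));                          /* prints 9 and 8 */
    printf("sizeof raw     = %zu\n", sizeof raw);                     /* prints 8 */
    /* strlen((char *)raw) would read past the end of the array: undefined behavior */
    return 0;
}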
So when you declare an array of 8 bytes as follows without a trailing null byte:
BYTE HMAC[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};
The problem is that "strlen(HMAC)" will count at least 8 bytes, and keep counting while traversing undefined memory until it finally (if ever) hits a byte that is zero. At best, you might get lucky because the stack memory happens to have a zero byte right after your array. More likely it will return a value much larger than 8. Maybe it will crash.
So when you parse the HMAC and MESSAGE fields from your protocol packet, count the number of bytes actually parsed (not including any terminating null) and pass that count into the HMAC functions to indicate the size of your data.
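For example, with OpenSSL's one-shot HMAC() helper you can pass the lengths explicitly (sizeof here; in real code it would be the byte counts you parsed out of the packet), so no trailing 0x00 and no strlen are needed:

#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(void)
{
    unsigned char key[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64}; /* "Password" */
    unsigned char msg[] = {0x4D,0x65,0x73,0x73,0x61,0x67,0x65};      /* "Message"  */

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;

    /* Key and data sizes are given explicitly, so no null terminator is required. */
    HMAC(EVP_sha1(), key, (int)sizeof key, msg, sizeof msg, digest, &digest_len);

    for (unsigned int i = 0; i < digest_len; i++)
        printf("%02x", digest[i]);
    printf("\n");   /* aff791fa574d564c83f6456cc198cbd316949dc9 */
    return 0;
}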
I don't know your protocol code, but I hope you aren't using strlen to figure out where the string inside the packet ends. A clever attacker could inject a packet with no null terminator and cause your code to do bad things. I hope you are parsing securely and carefully. Typical protocol code doesn't include the null terminating byte in the strings packed inside. Usually the "length" is encoded as an integer field followed by the string bytes, which makes it easier to parse and to check whether the length would exceed the packet size that was read in.