This toy program splits an integer into 4 bytes and later combines those bytes to reconstruct the input value, but it produces a wrong result. It works for positive integers; I am interested in making it work for signed (negative) integers as well.
Expected Output: -12345
Actual Output: -57
int main()
{
int j,i = -12345;
char b[4];
b[0] = (i >> 24) & 0xFF;
b[1] = (i >> 16) & 0xFF;
b[2] = (i >> 8) & 0xFF;
b[3] = (i >> 0) & 0xFF;
j = (int)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0));
std::cout << j;
return 0;
}
There are actually two problems that lead to your "error".
The first is that the result of, e.g., b[0] << 24 will be an int. When you cast that to a char (assuming char is an 8-bit type), you cut off the top 24 bits of the value, truncating it.
The second problem is that char could be unsigned (it's implementation-defined if char is signed or unsigned). If char is unsigned then the value -1 (0xffffffff) will become 255 (0x000000ff).
When you then bring all of that together, it will almost certainly result in wrong values. In fact it explains your output exactly: -12345 is 0xFFFFCFC7, so b[3] holds the bit pattern 0xC7, which a signed char reads back as -57. During recombination each byte is promoted to int with sign extension, and ORing the promoted values together leaves 0xFFFFFFC7, i.e. -57.
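A minimal sketch of the effect, assuming char is signed and 8 bits wide (typical on mainstream platforms):
#include <iostream>
int main()
{
    char b3 = static_cast<char>(0xC7); // the low byte of -12345 (0xFFFFCFC7)
    int promoted = b3;                 // sign-extends to 0xFFFFFFC7, i.e. -57
    std::cout << promoted << '\n';     // prints -57, matching the "actual output" above
    return 0;
}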
In general, whenever you feel the need to do a C-style cast (like in (char)(b[0] << 24)) when programming in C++, you should take that as a sign that you're doing something wrong.
One possible way to solve your problem is to always work with explicit unsigned data types.
First you need to copy the original int value to an unsigned int:
unsigned ui;
memcpy(&ui, &i, sizeof ui);
Then use ui instead of i when doing the "split". And explicitly use unsigned char:
unsigned char b[sizeof(unsigned)] = { 0 };
b[0] = (ui >> 24) & 0xFF;
b[1] = (ui >> 16) & 0xFF;
b[2] = (ui >> 8) & 0xFF;
b[3] = (ui >> 0) & 0xFF;
Then to put it all back, again use an explicit unsigned type, and copy it to the resulting variable:
unsigned uj = ((unsigned)b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0); // the cast keeps the 24-bit shift in unsigned arithmetic
memcpy(&j, &uj, sizeof j);
I suggest using unsigned data types here to avoid possible problems that can come from sign-extension during conversion.
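Putting the pieces together, a complete version of the fixed program might look like this (a sketch; note the extra cast on b[0] so that the 24-bit shift also happens in unsigned arithmetic):
#include <cstring>
#include <iostream>
int main()
{
    int j, i = -12345;
    // Reinterpret the bits of i as unsigned so the shifts are well defined.
    unsigned ui;
    std::memcpy(&ui, &i, sizeof ui);
    // Split into bytes, most significant first.
    unsigned char b[sizeof(unsigned)] = { 0 };
    b[0] = (ui >> 24) & 0xFF;
    b[1] = (ui >> 16) & 0xFF;
    b[2] = (ui >> 8) & 0xFF;
    b[3] = (ui >> 0) & 0xFF;
    // Recombine; each byte value is in [0, 255], so no sign extension occurs.
    unsigned uj = ((unsigned)b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0);
    std::memcpy(&j, &uj, sizeof j);
    std::cout << j << '\n'; // prints -12345
    return 0;
}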
Your code works only for positive numbers! i is negative, so after the shifts the stored bytes end up with their sign bits set, and the sign extension that happens when they are recombined produces the error.
Try this instead:
int main()
{
int j, i = -12345;
const char* bytes = reinterpret_cast<const char*>(&i); // view the int's storage as raw bytes
j = *reinterpret_cast<const int*>(bytes);              // read the same four bytes back as an int
std::cout << j;
return 0;
}
I am trying to write out a four-byte length field (most significant byte first) that holds the total length of some data. I found a code snippet to compute this, but I didn't get 4 bytes of data in the output. Instead I only got a 2-byte value.
char bytesLen[4] ;
unsigned int blockSize = 535;
bytesLen[0] = (blockSize & 0xFF);
bytesLen[1] = (blockSize >> 8) & 0xFF;
bytesLen[2] = (blockSize >> 16) & 0xFF;
bytesLen[3] = (blockSize >> 24) & 0xFF;
std::cout << "bytesLen: " << bytesLen << '\n';
Did I miss something in my code?
No, you didn't. You're outputting the array as if it were a C string, which is expected to be null-terminated. The third byte is NUL, so only two characters are shown.
This is not a sensible way to output binary values.
Also, you're storing the least significant byte first, not the most significant. For most significant first you have to reverse the order of the bytes.
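For instance, a minimal sketch that prints every byte as two hex digits (the int cast keeps operator<< from treating the byte as a character):
#include <iomanip>
#include <iostream>
int main()
{
    unsigned int blockSize = 535;
    unsigned char bytesLen[4];
    bytesLen[3] = blockSize & 0xFF;         // index 0 holds the most significant byte
    bytesLen[2] = (blockSize >> 8) & 0xFF;
    bytesLen[1] = (blockSize >> 16) & 0xFF;
    bytesLen[0] = (blockSize >> 24) & 0xFF;
    for (int i = 0; i < 4; i++)
        std::cout << std::setw(2) << std::setfill('0') << std::hex
                  << (int)bytesLen[i] << ' ';
    std::cout << '\n'; // prints: 00 00 02 17
    return 0;
}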
This shows how to do the same thing without shift operators and bitmasks.
#include <iostream>
#include <iomanip>
// C++11
#include <cstdint>
int main(void)
{
// with union, the memory allocation is shared
union {
uint8_t bytes[4];
uint32_t n;
} length;
// see htonl if needs to be in network byte order
// or ntohl if from network byte order to host
length.n = 535;
std::cout << std::hex;
for(int i=0; i<4; i++) {
std::cout << (unsigned int)length.bytes[i] << " ";
}
std::cout << std::endl;
return 0;
}
If you want the most significant byte first, then the bytes here are in reversed order; you'll need to swap them.
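If you'd rather not swap bytes by hand, a hedged alternative (assuming a POSIX system, where htonl lives in <arpa/inet.h>) is to convert the value to network byte order before splitting, so bytes[0] always holds the most significant byte:
#include <arpa/inet.h> // htonl: host byte order -> network (big-endian) order
#include <cstdint>
#include <iostream>
int main()
{
    union {
        uint8_t bytes[4];
        uint32_t n;
    } length;
    length.n = htonl(535); // bytes[0] is now the most significant byte
    std::cout << std::hex;
    for (int i = 0; i < 4; i++)
        std::cout << (unsigned int)length.bytes[i] << " "; // prints: 0 0 2 17
    std::cout << std::endl;
    return 0;
}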
You get incorrect output because you treat everything as a C string even though it is not. Get rid of the char type and fix the printing.
In C++, it would be like this:
#include <iostream>
#include <cstdint>
int main()
{
uint8_t bytesLen[sizeof(uint32_t)];
uint32_t blockSize = 535;
bytesLen[3] = (blockSize >> 0) & 0xFF;
bytesLen[2] = (blockSize >> 8) & 0xFF;
bytesLen[1] = (blockSize >> 16) & 0xFF;
bytesLen[0] = (blockSize >> 24) & 0xFF;
bool removeZeroes = true;
std::cout << "bytesLen: 0x";
for(size_t i=0; i<sizeof(bytesLen); i++)
{
if(bytesLen[i] != 0)
{
removeZeroes = false;
}
if(!removeZeroes)
{
std::cout << std::hex << (int)bytesLen[i];
}
}
std::cout << std::endl;
return 0;
}
Here's the fixed code [untested]. Note this won't compile as is. You'll need to reorder it slightly, but it should help:
unsigned char bytesLen[4] ;
unsigned int blockSize = 535;
// little endian
#if 0
bytesLen[0] = (blockSize & 0xFF);
bytesLen[1] = (blockSize >> 8) & 0xFF;
bytesLen[2] = (blockSize >> 16) & 0xFF;
bytesLen[3] = (blockSize >> 24) & 0xFF;
// big endian
#else
bytesLen[3] = (blockSize & 0xFF);
bytesLen[2] = (blockSize >> 8) & 0xFF;
bytesLen[1] = (blockSize >> 16) & 0xFF;
bytesLen[0] = (blockSize >> 24) & 0xFF;
#endif
char tmp[9];
char *
pretty_print(char *dst,unsigned char *src)
{
const char *hex = "0123456789ABCDEF";
char *bp = dst;
int chr;
for (int idx = 0; idx <= 3; ++idx) {
chr = src[idx];
*bp++ = hex[(chr >> 4) & 0x0F];
*bp++ = hex[(chr >> 0) & 0x0F];
}
*bp = 0;
return dst;
}
std::cout << "bytesLen: " << pretty_print(tmp,bytesLen) << '\n';
UPDATE:
Based on your follow-up question: to concatenate binary data, we cannot use string-like functions such as sprintf, because the binary data may contain 0x00 bytes, which would cut the copy short. Conversely, if the binary data had no 0x00 in it, the string functions would run beyond the end of the array(s) looking for one, and bad things would happen. The string functions also assume char data, and when dealing with raw binary we want to use unsigned char.
Here's something to try:
unsigned char finalData[1000]; // size is just example
unsigned char bytesLen[4];
unsigned char blockContent[300];
unsigned char *dst;
dst = finalData;
memcpy(dst,bytesLen,sizeof(bytesLen));
dst += sizeof(bytesLen);
memcpy(dst,blockContent,sizeof(blockContent));
dst += sizeof(blockContent);
// append more if needed in similar way ...
Note: the above presupposes that blockContent is of fixed size. If it had a variable number of bytes, we'd need to replace sizeof(blockContent) with (e.g.) bclen, where that is the actual number of bytes in blockContent, as sketched below.
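For the variable-length case, a hedged sketch (bclen is a hypothetical variable holding the real byte count):
size_t bclen = 123;               // hypothetical: number of valid bytes in blockContent
memcpy(dst, blockContent, bclen); // copy only the valid bytes
dst += bclen;                     // advance the write cursor past them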
So I have a design which incorporates CRC32C checksums to ensure data hasn't been damaged. I decided to use CRC32C because I can have both a software version and a hardware-accelerated version if the computer the software runs on supports SSE 4.2.
I'm going by Intel's developer manual (vol 2A), which seems to provide the algorithm behind the crc32 instruction. However, I'm having little luck. Intel's developer guide says the following:
BIT_REFLECT32: DEST[31-0] = SRC[0-31]
MOD2: Remainder from Polynomial division modulus 2
TEMP1[31-0] <- BIT_REFLECT(SRC[31-0])
TEMP2[31-0] <- BIT_REFLECT(DEST[31-0])
TEMP3[63-0] <- TEMP1[31-0] << 32
TEMP4[63-0] <- TEMP2[31-0] << 32
TEMP5[63-0] <- TEMP3[63-0] XOR TEMP4[63-0]
TEMP6[31-0] <- TEMP5[63-0] MOD2 0x11EDC6F41
DEST[31-0] <- BIT_REFLECT(TEMP6[31-0])
Now, as far as I can tell, I've done everything up to the line starting TEMP6 correctly, but I think I may be either misunderstanding the polynomial division, or implementing it incorrectly. If my understanding is correct, 1 / 1 mod 2 = 1, 0 / 1 mod 2 = 0, and both divides-by-zero are undefined.
What I don't understand is how binary division with 64-bit and 33-bit operands will work. If SRC is 0x00000000, and DEST is 0xFFFFFFFF, TEMP5[63-32] will be all set bits, while TEMP5[31-0] will be all unset bits.
If I were to use the bits from TEMP5 as the numerator, there would be divisions by zero, since the polynomial 11EDC6F41 is only 33 bits long (converting it to a 64-bit unsigned integer leaves the top 31 bits unset), and so the denominator is unset for those 31 bits.
However, if I were to use the polynomial as the numerator, the bottom 32 bits of TEMP5 are unset, resulting in divisions by zero there, and the top 31 bits of the result would be zero, since the top 31 bits of the numerator would be zero and 0 / 1 mod 2 = 0.
Am I misunderstanding how this works? Just plain missing something? Or has Intel left out some crucial step in their documentation?
The reason I went to Intel's developer guide for what appeared to be the algorithm they used is that they use a 33-bit polynomial, and I wanted to make the outputs identical, which didn't happen when I used the 32-bit polynomial 1EDC6F41 (shown below).
uint32_t poly = 0x1EDC6F41, sres, crcTable[256], data = 0x00000000;
unsigned n, k;
for (n = 0; n < 256; n++) {
sres = n;
for (k = 0; k < 8; k++)
sres = (sres & 1) == 1 ? poly ^ (sres >> 1) : (sres >> 1);
crcTable[n] = sres;
}
sres = 0xFFFFFFFF;
for (n = 0; n < 4; n++) {
sres = crcTable[(sres ^ data) & 0xFF] ^ (sres >> 8);
}
The above code produces 4138093821 as an output, and the crc32 opcode produces 2346497208 using the input 0x00000000.
Sorry if this is badly written or incomprehensible in places, it is rather late for me.
Here are both software and hardware versions of CRC-32C. The software version is optimized to process eight bytes at a time. The hardware version is optimized to run three crc32q instructions effectively in parallel on a single core, since the throughput of that instruction is one cycle, but the latency is three cycles.
crc32c.c:
/* crc32c.c -- compute CRC-32C using the Intel crc32 instruction
* Copyright (C) 2013, 2021 Mark Adler
* Version 1.2 5 Jun 2021 Mark Adler
*/
/*
This software is provided 'as-is', without any express or implied
warranty. In no event will the author be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Mark Adler
madler@alumni.caltech.edu
*/
/* Version History:
1.0 10 Feb 2013 First version
1.1 31 May 2021 Correct register constraints on assembly instructions
Include pre-computed tables to avoid use of pthreads
Return zero for the CRC when buf is NULL, as initial value
1.2 5 Jun 2021 Make tables constant
*/
// Use hardware CRC instruction on Intel SSE 4.2 processors. This computes a
// CRC-32C, *not* the CRC-32 used by Ethernet and zip, gzip, etc. A software
// version is provided as a fall-back, as well as for speed comparisons.
#include <stddef.h>
#include <stdint.h>
// Tables for CRC word-wise calculation, definitions of LONG and SHORT, and CRC
// shifts by LONG and SHORT bytes.
#include "crc32c.h"
// Table-driven software version as a fall-back. This is about 15 times slower
// than using the hardware instructions. This assumes little-endian integers,
// as is the case on Intel processors that the assembler code here is for.
static uint32_t crc32c_sw(uint32_t crc, void const *buf, size_t len) {
if (buf == NULL)
return 0;
unsigned char const *data = buf;
while (len && ((uintptr_t)data & 7) != 0) {
crc = (crc >> 8) ^ crc32c_table[0][(crc ^ *data++) & 0xff];
len--;
}
size_t n = len >> 3;
for (size_t i = 0; i < n; i++) {
uint64_t word = crc ^ ((uint64_t const *)data)[i];
crc = crc32c_table[7][word & 0xff] ^
crc32c_table[6][(word >> 8) & 0xff] ^
crc32c_table[5][(word >> 16) & 0xff] ^
crc32c_table[4][(word >> 24) & 0xff] ^
crc32c_table[3][(word >> 32) & 0xff] ^
crc32c_table[2][(word >> 40) & 0xff] ^
crc32c_table[1][(word >> 48) & 0xff] ^
crc32c_table[0][word >> 56];
}
data += n << 3;
len &= 7;
while (len) {
len--;
crc = (crc >> 8) ^ crc32c_table[0][(crc ^ *data++) & 0xff];
}
return crc;
}
// Apply the zeros operator table to crc.
static uint32_t crc32c_shift(uint32_t const zeros[][256], uint32_t crc) {
return zeros[0][crc & 0xff] ^ zeros[1][(crc >> 8) & 0xff] ^
zeros[2][(crc >> 16) & 0xff] ^ zeros[3][crc >> 24];
}
// Compute CRC-32C using the Intel hardware instruction. Three crc32q
// instructions are run in parallel on a single core. This gives a
// factor-of-three speedup over a single crc32q instruction, since the
// throughput of that instruction is one cycle, but the latency is three
// cycles.
static uint32_t crc32c_hw(uint32_t crc, void const *buf, size_t len) {
if (buf == NULL)
return 0;
// Pre-process the crc.
uint64_t crc0 = crc ^ 0xffffffff;
// Compute the crc for up to seven leading bytes, bringing the data pointer
// to an eight-byte boundary.
unsigned char const *next = buf;
while (len && ((uintptr_t)next & 7) != 0) {
__asm__("crc32b\t" "(%1), %0"
: "+r"(crc0)
: "r"(next), "m"(*next));
next++;
len--;
}
// Compute the crc on sets of LONG*3 bytes, making use of three ALUs in
// parallel on a single core.
while (len >= LONG*3) {
uint64_t crc1 = 0;
uint64_t crc2 = 0;
unsigned char const *end = next + LONG;
do {
__asm__("crc32q\t" "(%3), %0\n\t"
"crc32q\t" LONGx1 "(%3), %1\n\t"
"crc32q\t" LONGx2 "(%3), %2"
: "+r"(crc0), "+r"(crc1), "+r"(crc2)
: "r"(next), "m"(*next));
next += 8;
} while (next < end);
crc0 = crc32c_shift(crc32c_long, crc0) ^ crc1;
crc0 = crc32c_shift(crc32c_long, crc0) ^ crc2;
next += LONG*2;
len -= LONG*3;
}
// Do the same thing, but now on SHORT*3 blocks for the remaining data less
// than a LONG*3 block.
while (len >= SHORT*3) {
uint64_t crc1 = 0;
uint64_t crc2 = 0;
unsigned char const *end = next + SHORT;
do {
__asm__("crc32q\t" "(%3), %0\n\t"
"crc32q\t" SHORTx1 "(%3), %1\n\t"
"crc32q\t" SHORTx2 "(%3), %2"
: "+r"(crc0), "+r"(crc1), "+r"(crc2)
: "r"(next), "m"(*next));
next += 8;
} while (next < end);
crc0 = crc32c_shift(crc32c_short, crc0) ^ crc1;
crc0 = crc32c_shift(crc32c_short, crc0) ^ crc2;
next += SHORT*2;
len -= SHORT*3;
}
// Compute the crc on the remaining eight-byte units less than a SHORT*3
// block.
unsigned char const *end = next + (len - (len & 7));
while (next < end) {
__asm__("crc32q\t" "(%1), %0"
: "+r"(crc0)
: "r"(next), "m"(*next));
next += 8;
}
len &= 7;
// Compute the crc for up to seven trailing bytes.
while (len) {
__asm__("crc32b\t" "(%1), %0"
: "+r"(crc0)
: "r"(next), "m"(*next));
next++;
len--;
}
// Return the crc, post-processed.
return ~(uint32_t)crc0;
}
// Check for SSE 4.2. SSE 4.2 was first supported in Nehalem processors
// introduced in November, 2008. This does not check for the existence of the
// cpuid instruction itself, which was introduced on the 486SL in 1992, so this
// will fail on earlier x86 processors. cpuid works on all Pentium and later
// processors.
#define SSE42(have) \
do { \
uint32_t eax, ecx; \
eax = 1; \
__asm__("cpuid" \
: "=c"(ecx) \
: "a"(eax) \
: "%ebx", "%edx"); \
(have) = (ecx >> 20) & 1; \
} while (0)
// Compute a CRC-32C. If the crc32 instruction is available, use the hardware
// version. Otherwise, use the software version.
uint32_t crc32c(uint32_t crc, void const *buf, size_t len) {
int sse42;
SSE42(sse42);
return sse42 ? crc32c_hw(crc, buf, len) : crc32c_sw(crc, buf, len);
}
Code to generate crc32c.h (stackoverflow won't let me post the tables themselves, due to a 30,000 character limit in an answer):
// Generate crc32c.h for crc32c.c.
#include <stdio.h>
#include <stdint.h>
#define LONG 8192
#define SHORT 256
// Print a 2-D table of four-byte constants in hex.
static void print_table(uint32_t *tab, size_t rows, size_t cols, char *name) {
printf("static uint32_t const %s[][%zu] = {\n", name, cols);
size_t end = rows * cols;
size_t k = 0;
for (;;) {
fputs(" {", stdout);
size_t n = 0, j = 0;
for (;;) {
printf("0x%08x", tab[k + n]);
if (++n == cols)
break;
putchar(',');
if (++j == 6) {
fputs("\n ", stdout);
j = 0;
}
putchar(' ');
}
k += cols;
if (k == end)
break;
puts("},");
}
puts("}\n};");
}
/* CRC-32C (iSCSI) polynomial in reversed bit order. */
#define POLY 0x82f63b78
static void crc32c_word_table(void) {
uint32_t table[8][256];
// Generate byte-wise table.
for (unsigned n = 0; n < 256; n++) {
uint32_t crc = ~n;
for (unsigned k = 0; k < 8; k++)
crc = crc & 1 ? (crc >> 1) ^ POLY : crc >> 1;
table[0][n] = ~crc;
}
// Use byte-wise table to generate word-wise table.
for (unsigned n = 0; n < 256; n++) {
uint32_t crc = ~table[0][n];
for (unsigned k = 1; k < 8; k++) {
crc = table[0][crc & 0xff] ^ (crc >> 8);
table[k][n] = ~crc;
}
}
// Print table.
print_table(table[0], 8, 256, "crc32c_table");
}
// Return a(x) multiplied by b(x) modulo p(x), where p(x) is the CRC
// polynomial. For speed, this requires that a not be zero.
static uint32_t multmodp(uint32_t a, uint32_t b) {
uint32_t prod = 0;
for (;;) {
if (a & 0x80000000) {
prod ^= b;
if ((a & 0x7fffffff) == 0)
break;
}
a <<= 1;
b = b & 1 ? (b >> 1) ^ POLY : b >> 1;
}
return prod;
}
/* Take a length and build four lookup tables for applying the zeros operator
for that length, byte-by-byte, on the operand. */
static void crc32c_zero_table(size_t len, char *name) {
// Generate operator for len zeros.
uint32_t op = 0x80000000; // 1 (x^0)
uint32_t sq = op >> 4; // x^4
while (len) {
sq = multmodp(sq, sq); // x^2^(k+3), k == len bit position
if (len & 1)
op = multmodp(sq, op);
len >>= 1;
}
// Generate table to update each byte of a CRC using op.
uint32_t table[4][256];
for (unsigned n = 0; n < 256; n++) {
table[0][n] = multmodp(op, n);
table[1][n] = multmodp(op, n << 8);
table[2][n] = multmodp(op, n << 16);
table[3][n] = multmodp(op, n << 24);
}
// Print the table to stdout.
print_table(table[0], 4, 256, name);
}
int main(void) {
puts(
"// crc32c.h\n"
"// Tables and constants for crc32c.c software and hardware calculations.\n"
"\n"
"// Table for a 64-bits-at-a-time software CRC-32C calculation. This table\n"
"// has built into it the pre and post bit inversion of the CRC."
);
crc32c_word_table();
puts(
"\n// Block sizes for three-way parallel crc computation. LONG and SHORT\n"
"// must both be powers of two. The associated string constants must be set\n"
"// accordingly, for use in constructing the assembler instructions."
);
printf("#define LONG %d\n", LONG);
printf("#define LONGx1 \"%d\"\n", LONG);
printf("#define LONGx2 \"%d\"\n", 2 * LONG);
printf("#define SHORT %d\n", SHORT);
printf("#define SHORTx1 \"%d\"\n", SHORT);
printf("#define SHORTx2 \"%d\"\n", 2 * SHORT);
puts(
"\n// Table to shift a CRC-32C by LONG bytes."
);
crc32c_zero_table(8192, "crc32c_long");
puts(
"\n// Table to shift a CRC-32C by SHORT bytes."
);
crc32c_zero_table(256, "crc32c_short");
return 0;
}
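As a quick sanity check, here is a hedged usage sketch for the crc32c() entry point above; the CRC-32C of the standard test string "123456789" is the well-known check value 0xe3069283:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
uint32_t crc32c(uint32_t crc, void const *buf, size_t len); // from crc32c.c above
int main(void) {
    char const *msg = "123456789";
    uint32_t crc = crc32c(0, msg, strlen(msg)); // 0 is the conventional initial value
    printf("%08x\n", crc);                      // expect: e3069283
    return 0;
}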
Mark Adler's answer is correct and complete, but those seeking a quick and easy way to integrate CRC-32C into their application might find it a little difficult to adapt the code, especially if they are using Windows and .NET.
I've created a library that implements CRC-32C using either the hardware or the software method, depending on available hardware. It's available as a NuGet package for C++ and .NET. It's open source, of course.
Besides packaging Mark Adler's code above, I've found a simple way to improve throughput of the software fallback by 50%. On my computer, the library now achieves 2 GB/s in software and over 20 GB/s in hardware. For those curious, here's the optimized software implementation:
typedef const uint8_t *buffer; // assumption: the library's byte-pointer type, not shown in this excerpt
static uint32_t append_table(uint32_t crci, buffer input, size_t length)
{
buffer next = input;
#ifdef _M_X64
uint64_t crc;
#else
uint32_t crc;
#endif
crc = crci ^ 0xffffffff;
#ifdef _M_X64
while (length && ((uintptr_t)next & 7) != 0)
{
crc = table[0][(crc ^ *next++) & 0xff] ^ (crc >> 8);
--length;
}
while (length >= 16)
{
crc ^= *(uint64_t *)next;
uint64_t high = *(uint64_t *)(next + 8);
crc = table[15][crc & 0xff]
^ table[14][(crc >> 8) & 0xff]
^ table[13][(crc >> 16) & 0xff]
^ table[12][(crc >> 24) & 0xff]
^ table[11][(crc >> 32) & 0xff]
^ table[10][(crc >> 40) & 0xff]
^ table[9][(crc >> 48) & 0xff]
^ table[8][crc >> 56]
^ table[7][high & 0xff]
^ table[6][(high >> 8) & 0xff]
^ table[5][(high >> 16) & 0xff]
^ table[4][(high >> 24) & 0xff]
^ table[3][(high >> 32) & 0xff]
^ table[2][(high >> 40) & 0xff]
^ table[1][(high >> 48) & 0xff]
^ table[0][high >> 56];
next += 16;
length -= 16;
}
#else
while (length && ((uintptr_t)next & 3) != 0)
{
crc = table[0][(crc ^ *next++) & 0xff] ^ (crc >> 8);
--length;
}
while (length >= 12)
{
crc ^= *(uint32_t *)next;
uint32_t high = *(uint32_t *)(next + 4);
uint32_t high2 = *(uint32_t *)(next + 8);
crc = table[11][crc & 0xff]
^ table[10][(crc >> 8) & 0xff]
^ table[9][(crc >> 16) & 0xff]
^ table[8][crc >> 24]
^ table[7][high & 0xff]
^ table[6][(high >> 8) & 0xff]
^ table[5][(high >> 16) & 0xff]
^ table[4][high >> 24]
^ table[3][high2 & 0xff]
^ table[2][(high2 >> 8) & 0xff]
^ table[1][(high2 >> 16) & 0xff]
^ table[0][high2 >> 24];
next += 12;
length -= 12;
}
#endif
while (length)
{
crc = table[0][(crc ^ *next++) & 0xff] ^ (crc >> 8);
--length;
}
return (uint32_t)crc ^ 0xffffffff;
}
As you can see, it merely crunches a larger block at a time. It needs a larger lookup table, but it's still cache-friendly. The table is generated the same way as before, only with more rows.
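For reference, here is a hedged sketch of how such a 16-row table can be built (my reconstruction, not the library's exact code): row 0 is the classic byte-wise CRC-32C table, and each further row advances the CRC by one more zero byte, so row k handles the input byte that sits k positions earlier in the 16-byte block:
#include <stdint.h>
#define POLY 0x82f63b78 /* CRC-32C polynomial, reversed bit order */
static uint32_t table[16][256];
static void init_table(void)
{
    // Row 0: the conventional bit-at-a-time table.
    for (uint32_t n = 0; n < 256; n++) {
        uint32_t crc = n;
        for (int k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ POLY : crc >> 1;
        table[0][n] = crc;
    }
    // Rows 1..15: feed one more zero byte through row 0 each time.
    for (uint32_t n = 0; n < 256; n++)
        for (int k = 1; k < 16; k++)
            table[k][n] = table[0][table[k - 1][n] & 0xff] ^ (table[k - 1][n] >> 8);
}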
One extra thing I explored is the use of the PCLMULQDQ instruction to get hardware acceleration on AMD processors. I managed to port Intel's CRC patch for zlib (also available on GitHub) to the CRC-32C polynomial, except for the magic constant 0x9db42487. If anyone is able to decipher that one, please let me know. Update: after supersaw7's excellent explanation on reddit, I have now ported the elusive 0x9db42487 constant as well; I just need to find some time to polish and test it.
First of all, Intel's crc32 instruction computes CRC-32C; that is, it uses a different polynomial than regular CRC-32 (see the Wikipedia CRC32 entry).
To use Intel's hardware acceleration for CRC32C using gcc you can:
Inline assembly language in C code via the asm statement
Use the intrinsics _mm_crc32_u8, _mm_crc32_u16, _mm_crc32_u32 or _mm_crc32_u64. See the Intel Intrinsics Guide for a description of them; they are documented for Intel's compiler icc, but gcc implements them as well.
This is how you would do it with _mm_crc32_u8, which takes one byte at a time; using _mm_crc32_u64 would give a further performance improvement, since it takes 8 bytes at a time.
uint32_t sse42_crc32(const uint8_t *bytes, size_t len)
{
uint32_t hash = 0;
size_t i = 0;
for (i=0;i<len;i++) {
hash = _mm_crc32_u8(hash, bytes[i]);
}
return hash;
}
To compile this you need to pass -msse4.2 in CFLAGS, e.g. gcc -g -msse4.2 test.c; otherwise it will complain about an undefined reference to _mm_crc32_u8.
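Since the loop above goes one byte at a time, here is a hedged sketch of the eight-bytes-at-a-time variant using _mm_crc32_u64 (my own function name; assumes a 64-bit target, and the memcpy sidesteps unaligned-access issues):
#include <nmmintrin.h>
#include <stdint.h>
#include <string.h>
uint32_t sse42_crc32_u64(const uint8_t *bytes, size_t len)
{
    uint64_t hash = 0;
    size_t i = 0;
    for (; i + 8 <= len; i += 8) { // bulk of the data, 8 bytes per step
        uint64_t chunk;
        memcpy(&chunk, bytes + i, 8);
        hash = _mm_crc32_u64(hash, chunk);
    }
    for (; i < len; i++)           // 0-7 trailing bytes
        hash = _mm_crc32_u8((uint32_t)hash, bytes[i]);
    return (uint32_t)hash;
}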
If you want to fall back to a plain C implementation when the instruction is not available on the platform where the executable is running, you can use GCC's ifunc attribute, like this:
uint32_t sse42_crc32(const uint8_t *bytes, size_t len)
{
/* use _mm_crc32_u* here */
}
uint32_t default_crc32(const uint8_t *bytes, size_t len)
{
/* pure C implementation */
}
/* this will be called at load time to decide which function really use */
/* sse42_crc32 if SSE 4.2 is supported */
/* default_crc32 if not */
static void * resolve_crc32(void) {
__builtin_cpu_init();
if (__builtin_cpu_supports("sse4.2")) return sse42_crc32;
return default_crc32;
}
/* crc32() implementation will be resolved at load time to either */
/* sse42_crc32() or default_crc32() */
uint32_t crc32(const uint8_t *bytes, size_t len) __attribute__ ((ifunc ("resolve_crc32")));
I compare various algorithms here: https://github.com/htot/crc32c
The fastest algorithm has been taken from Intel's crc_iscsi_v_pcl.asm assembly code (which is available in a modified form in the Linux kernel), using a C wrapper (crcintelasm.cc) included in this project.
To be able to run this code on 32-bit platforms, it has first been ported to C (crc32intelc) where possible; a small amount of inline assembly is still required. Certain parts of the code depend on the bitness: crc32q is not available on 32 bits and neither is movq, so these are put in macros (crc32intel.h) with alternative code for 32-bit platforms.
I am trying to convert 32-bit PNG files to 16-bit file formats. I understand how to convert between 16-bit formats (e.g. RGB565 and RGBA4444), but I'm not sure how to go about converting from 32 bits down to 16.
My main questions are: How do I find out how the 32-bit PNG pixels are stored (are 8 bits each assigned to the R, G, B, and A values)?
How do I lose precision but still maintain roughly the same value?
Thanks in advance
You'd be much better off using libpng than implementing this by hand.
I am not familiar with the exact layout of a 32-bit PNG pixel, but assuming it is relatively consistent with other formats, you probably want to do something similar to this:
// Get the pixel from the png:
unsigned int pngPixel = getPngPixel();
unsigned char r = (pngPixel & 0xFF000000) >> 24;
unsigned char g = (pngPixel & 0x00FF0000) >> 16;
unsigned char b = (pngPixel & 0x0000FF00) >> 8;
unsigned char a = (pngPixel & 0x000000FF);
// you can collapse this to one line, but for clarity...
// masking off the least significant bits.
unsigned short rgb565Pixel = (r & 0xF8) << 8; // red: top 5 bits into bits 11-15
rgb565Pixel |= (g & 0xFC) << 3;               // green: top 6 bits into bits 5-10
rgb565Pixel |= (b & 0xF8) >> 3;               // blue: top 5 bits into bits 0-4
// Again you could collapse this down to one line, but for clarity...
// masking off the least significant bits.
unsigned short rgba4Pixel = (r & 0xF0) << 8; // red: top 4 bits into bits 12-15
rgba4Pixel |= (g & 0xF0) << 4;               // green: bits 8-11
rgba4Pixel |= (b & 0xF0);                    // blue: bits 4-7
rgba4Pixel |= (a & 0xF0) >> 4;               // alpha: bits 0-3
Consider this pseudocode.
One could argue that masking off the least significant bits, especially when converting from 8 bit to 4 bit, is not a very good way to convert between the two, and they would be right. You could instead use a conversion function:
unsigned int convertColor(unsigned char c, unsigned int oldMax, unsigned int newMax) {
double oldColor = c;
double percentOfMax = oldColor / oldMax;
return ((unsigned int)(newMax * percentOfMax)) & newMax;
}
// now we can do this
unsigned short rgba4Pixel = convertColor(r, 0xFF, 0x0F) << 12;
rgba4Pixel |= convertColor(g, 0xFF, 0x0F) << 8;
rgba4Pixel |= convertColor(b, 0xFF, 0x0F) << 4;
rgba4Pixel |= convertColor(a, 0xFF, 0x0F);
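One refinement worth noting: convertColor truncates toward zero. A hedged, integer-only tweak (a hypothetical variant of the same idea) that rounds to the nearest representable value instead:
unsigned int convertColorRounded(unsigned char c, unsigned int oldMax, unsigned int newMax) {
    // Adding oldMax/2 before dividing rounds to nearest instead of truncating.
    return ((c * newMax + oldMax / 2) / oldMax) & newMax;
}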