I have the function below that converts an LPCTSTR to BYTE, but the input str only supports digits as of now.
void StrToByte2(LPCTSTR str, BYTE *dest)
{
UINT count = _ttoi(str);
BYTE buf[4] = { 0 };
char string[10] = { 0 };
sprintf_s(string, 10, "%04d", count);
for (int i = 0; i < 4; ++i)
{
if ((string[i] >= '0') && (string[i] <= '9'))
buf[i] = string[i] - '0';
}
dest[0] = (BYTE)(buf[0] << 4) | buf[1];
dest[1] = (BYTE)(buf[2] << 4) | buf[3];
}
If I call this function on "1234" (or any digits), dest outputs something like 12814,
struct st
{
byte btID[2];
int nID;
};
PTR ptr(new st);
StrToByte2(strCode, ptr->btID);
but when I call this function on any hexadecimal input, e.g. A123, it always outputs 0000.
The function below is used to convert the dest code back to a str:
CString Byte2ToStr(const byte* pbuf)
{
CString str;
str.Format(_T("%02X%02X"), pbuf[0], pbuf[1]);
return str;
}
How can I get A123 converted to bytes and then back to a str so it displays A123?
Please help!
PTR ptr(new st);
This is a memory leak in C++ (assuming PTR is a raw pointer type): new st allocates memory that is never released.
UINT count = _ttoi(str);
...
sprintf_s(string, 10, "%04d", count);
This converts the string to an integer, then converts the integer back to a string. It doesn't seem to serve a real purpose.
For example, "1234" is converted to 1234 and back to "1234". But "A123" is not a valid decimal number, so it is converted to 0 and then to "0000". That's why this method fails. You can just work with the original string.
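If the input really is hexadecimal, a minimal sketch that works directly with the original string could look like this (HexStrToByte2 is my own name; it assumes exactly four hex digits and the Windows BYTE typedef):
#include <cwchar> // wcstoul
bool HexStrToByte2(const wchar_t* str, BYTE* dest)
{
wchar_t* end = nullptr;
unsigned long v = wcstoul(str, &end, 16); // base 16 accepts "A123"
if (end != str + 4) // expect exactly four hex digits
return false;
dest[0] = (BYTE)(v >> 8); // "A123" -> 0xA1
dest[1] = (BYTE)(v & 0xFF); // "A123" -> 0x23
return true;
}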
It seems this function tries to fit two decimal digits into one byte. This can be done as long as each value is at most 15 (0xF). (I don't know what purpose this might have.) It can be fixed as follows:
void StrToByte2(const wchar_t* str, BYTE *dest)
{
int len = wcslen(str);
if(len != 4)
return; //handle error
char buf[4] = { 0 };
for(int i = 0; i < 4; ++i)
if(str[i] >= L'0' && str[i] <= L'9')
buf[i] = (BYTE)(str[i] - L'0');
dest[0] = (buf[0] << 4) + buf[1];
dest[1] = (buf[2] << 4) + buf[3];
}
CStringW Byte2_To_Str(BYTE *dest)
{
CStringW str;
str.AppendFormat(L"%X", 0xF & (dest[0] >> 4));
str.AppendFormat(L"%X", 0xF & (dest[0]));
str.AppendFormat(L"%X", 0xF & (dest[1] >> 4));
str.AppendFormat(L"%X", 0xF & (dest[1]));
return str;
}
int main()
{
BYTE dest[2] = { 0 };
StrToByte2(L"1234", dest);
OutputDebugStringW(Byte2_To_Str(dest));
OutputDebugStringW(L"\n");
return 0;
}
If the string is hexadecimal, you can use sscanf to convert each pair of characters to a byte.
Basically, "1234" changes to 12 34
"A123" changes to A1 23
bool hexstring_to_bytes(const wchar_t* str, BYTE *dest, int dest_size = 2)
{
int len = wcslen(str);
if((len / 2) > dest_size)
{
//error
return false;
}
for(int i = 0; i < len / 2; i++)
{
unsigned int v; // %x expects a pointer to unsigned
if(swscanf_s(str + i * 2, L"%2x", &v) != 1)
break;
dest[i] = (unsigned char)v;
}
return true;
}
CStringW bytes_to_hexstring(const BYTE* bytes, int byte_size = 2)
{
CStringW str;
for(int i = 0; i < byte_size; i++)
str.AppendFormat(L"%02X ", bytes[i] & 0xFF);
return str;
}
int main()
{
CStringW str;
CStringW new_string;
BYTE dest[2] = { 0 };
str = L"1234";
hexstring_to_bytes(str, dest);
new_string = bytes_to_hexstring(dest);
OutputDebugStringW(new_string);
OutputDebugStringW(L"\n");
str = L"A123";
hexstring_to_bytes(str, dest);
new_string = bytes_to_hexstring(dest);
OutputDebugStringW(new_string);
OutputDebugStringW(L"\n");
return 0;
}
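If you'd rather avoid CString, the same round trip can be sketched with the standard library alone (hex_to_bytes_std and bytes_to_hex_std are hypothetical names; this is a sketch rather than drop-in code):
#include <string>
#include <stdexcept>
bool hex_to_bytes_std(const std::string& s, unsigned char* dest, size_t dest_size)
{
if (s.size() % 2 != 0 || s.size() / 2 > dest_size)
return false;
try {
for (size_t i = 0; i < s.size(); i += 2) {
size_t pos = 0;
unsigned long v = std::stoul(s.substr(i, 2), &pos, 16);
if (pos != 2) return false; // reject non-hex characters
dest[i / 2] = (unsigned char)v;
}
} catch (const std::exception&) {
return false; // stoul throws on completely invalid input
}
return true;
}
std::string bytes_to_hex_std(const unsigned char* bytes, size_t n)
{
static const char digits[] = "0123456789ABCDEF";
std::string out;
for (size_t i = 0; i < n; ++i) {
out += digits[bytes[i] >> 4];
out += digits[bytes[i] & 0xF];
}
return out;
}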
With English characters it is easy to extract, so to say, a char from a string; e.g., the following code should have y as output:
string my_word = "my_word";
cout << my_word.at(1);
If I try to do the same with Greek characters, I get a funny character:
string my_word = "λογος";
cout << my_word.at(1);
Output:
�
My question is: what can I do to make .at(), or a similar function, work?
Many thanks!
std::string is a sequence of narrow characters (char). But many national alphabets use more than one char to encode a single letter under a UTF-8 locale, so when you take s.at(0) you get half of a whole letter, or even less. You should use wide characters: std::wstring instead of std::string, std::wcout instead of std::cout, and L"λογος" as the string literal.
Also, you should set the right locale before any printing, using the std::locale facilities.
Code example for this case:
#include <iostream>
#include <string>
#include <locale>
int main(int, char**) {
std::locale::global(std::locale("en_US.utf8"));
std::wcout.imbue(std::locale());
std::wstring s = L"λογος";
std::wcout << s.at(0) << std::endl;
return 0;
}
The problem is complex. Non-Latin characters have to be encoded properly, and there are a couple of standards for that. The question is which encoding your system is using.
In UTF-8 encoding, one character is represented by multiple bytes: from 1 to 4, depending on what kind of character it is.
For example, λ is represented by two bytes (in hex): CE BB.
There are also character encodings that give single-byte characters for Greek letters (ISO 8859-7 is one such encoding).
Note that my_word.length() most probably returns 10, not 5.
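You can see this by printing the raw bytes (a minimal sketch of my own; it assumes the source file and execution charset are UTF-8):
#include <cstdio>
#include <string>
int main()
{
std::string my_word = "λογος";
std::printf("length: %zu\n", my_word.length()); // prints 10 under UTF-8, not 5
for (unsigned char c : my_word)
std::printf("%02X ", c); // starts with CE BB, the two bytes of λ
std::printf("\n");
return 0;
}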
As others have said, it depends on your encoding. An at() function is problematic once you move to internationalisation, because Hebrew, for example, has vowels written around the character. Not all scripts consist of discrete sequences of glyphs.
Generally it's best to treat strings as atomic, unless you are writing the display / word manipulation code itself, when of course you need the individual glyphs. To read UTF, check out the code in Baby X (it's a windowing system that has to draw text to the screen).
Here's the link: https://github.com/MalcolmMcLean/babyx/blob/master/src/common/BBX_Font.c
Here's the UTF-8 code. It's quite a hunk of code, but fundamentally straightforward.
static const unsigned int offsetsFromUTF8[6] =
{
0x00000000UL, 0x00003080UL, 0x000E2080UL,
0x03C82080UL, 0xFA082080UL, 0x82082080UL
};
static const unsigned char trailingBytesForUTF8[256] = {
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,4,4,4,4,5,5,5,5
};
int bbx_isutf8z(const char *str)
{
int len = 0;
int pos = 0;
int nb;
int i;
int ch;
while(str[len])
len++;
while(pos < len && *str)
{
nb = bbx_utf8_skip(str);
if(nb < 1 || nb > 4)
return 0;
if(pos + nb > len)
return 0;
for(i=1;i<nb;i++)
if( (str[i] & 0xC0) != 0x80 )
return 0;
ch = bbx_utf8_getch(str);
if(ch < 0x80)
{
if(nb != 1)
return 0;
}
else if(ch < 0x800) /* 2-byte sequences encode up to 0x7FF */
{
if(nb != 2)
return 0;
}
else if(ch < 0x10000)
{
if(nb != 3)
return 0;
}
else if(ch < 0x110000)
{
if(nb != 4)
return 0;
}
else
return 0; /* beyond the Unicode range */
pos += nb;
str += nb;
}
return 1;
}
int bbx_utf8_skip(const char *utf8)
{
return trailingBytesForUTF8[(unsigned char) *utf8] + 1;
}
int bbx_utf8_getch(const char *utf8)
{
int ch;
int nb;
nb = trailingBytesForUTF8[(unsigned char)*utf8];
ch = 0;
switch (nb)
{
/* these fall through deliberately */
case 3: ch += (unsigned char)*utf8++; ch <<= 6;
case 2: ch += (unsigned char)*utf8++; ch <<= 6;
case 1: ch += (unsigned char)*utf8++; ch <<= 6;
case 0: ch += (unsigned char)*utf8++;
}
ch -= offsetsFromUTF8[nb];
return ch;
}
int bbx_utf8_putch(char *out, int ch)
{
char *dest = out;
if (ch < 0x80)
{
*dest++ = (char)ch;
}
else if (ch < 0x800)
{
*dest++ = (ch>>6) | 0xC0;
*dest++ = (ch & 0x3F) | 0x80;
}
else if (ch < 0x10000)
{
*dest++ = (ch>>12) | 0xE0;
*dest++ = ((ch>>6) & 0x3F) | 0x80;
*dest++ = (ch & 0x3F) | 0x80;
}
else if (ch < 0x110000)
{
*dest++ = (ch>>18) | 0xF0;
*dest++ = ((ch>>12) & 0x3F) | 0x80;
*dest++ = ((ch>>6) & 0x3F) | 0x80;
*dest++ = (ch & 0x3F) | 0x80;
}
else
return 0;
return dest - out;
}
int bbx_utf8_charwidth(int ch)
{
if (ch < 0x80)
{
return 1;
}
else if (ch < 0x800)
{
return 2;
}
else if (ch < 0x10000)
{
return 3;
}
else if (ch < 0x110000)
{
return 4;
}
else
return 0;
}
int bbx_utf8_Nchars(const char *utf8)
{
int answer = 0;
while(*utf8)
{
utf8 += bbx_utf8_skip(utf8);
answer++;
}
return answer;
}
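With these helpers, an at()-style lookup is just a loop over code points. A minimal sketch built on the functions above (bbx_utf8_at is my own name, not part of Baby X):
int bbx_utf8_at(const char *utf8, int index)
{
/* skip 'index' code points, then decode the next one */
while (index-- > 0 && *utf8)
utf8 += bbx_utf8_skip(utf8);
if (!*utf8)
return -1; /* index out of range */
return bbx_utf8_getch(utf8);
}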
I need to convert double-byte characters, in my special case Shift-JIS, into something better to handle, preferably with standard C++.
The following question ended up without a workaround:
Doublebyte encodings on MSVC (std::codecvt): Lead bytes not recognized
So, is there anyone with a suggestion or a reference on how to handle this conversion with standard C++?
Normally I would recommend using the ICU library, but for this alone, using it is way too much overhead.
First, a conversion function which takes a std::string with Shift-JIS data and returns a std::string with UTF-8 (note 2019: no idea anymore if it works :))
It uses a uint8_t array of 25088 elements (25088 bytes), which is used as convTable in the code. The function does not fill this variable; you have to load it from e.g. a file first. The second code part below is a program that can generate the file.
The conversion function doesn't check whether the input is valid Shift-JIS data.
std::string sj2utf8(const std::string &input)
{
std::string output(3 * input.length(), ' '); //ShiftJis won't give 4byte UTF8, so max. 3 byte per input char are needed
size_t indexInput = 0, indexOutput = 0;
while(indexInput < input.length())
{
char arraySection = ((uint8_t)input[indexInput]) >> 4;
size_t arrayOffset;
if(arraySection == 0x8) arrayOffset = 0x100; //these are two-byte shiftjis
else if(arraySection == 0x9) arrayOffset = 0x1100;
else if(arraySection == 0xE) arrayOffset = 0x2100;
else arrayOffset = 0; //this is one byte shiftjis
//determining real array offset
if(arrayOffset)
{
arrayOffset += (((uint8_t)input[indexInput]) & 0xf) << 8;
indexInput++;
if(indexInput >= input.length()) break;
}
arrayOffset += (uint8_t)input[indexInput++];
arrayOffset <<= 1;
//unicode number is...
uint16_t unicodeValue = (convTable[arrayOffset] << 8) | convTable[arrayOffset + 1];
//converting to UTF8
if(unicodeValue < 0x80)
{
output[indexOutput++] = unicodeValue;
}
else if(unicodeValue < 0x800)
{
output[indexOutput++] = 0xC0 | (unicodeValue >> 6);
output[indexOutput++] = 0x80 | (unicodeValue & 0x3f);
}
else
{
output[indexOutput++] = 0xE0 | (unicodeValue >> 12);
output[indexOutput++] = 0x80 | ((unicodeValue & 0xfff) >> 6);
output[indexOutput++] = 0x80 | (unicodeValue & 0x3f);
}
}
output.resize(indexOutput); //remove the unnecessary bytes
return output;
}
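A minimal usage sketch, assuming convTable is a global uint8_t[25088] and the helper file is named shiftjis_table.bin (both names are mine):
#include <cstdint>
#include <cstdio>
#include <string>
uint8_t convTable[25088]; // filled from the generated helper file
bool loadConvTable(const char *path)
{
FILE *f = std::fopen(path, "rb");
if (!f)
return false;
size_t n = std::fread(convTable, 1, sizeof convTable, f);
std::fclose(f);
return n == sizeof convTable;
}
// Usage:
// if (loadConvTable("shiftjis_table.bin"))
// std::string utf8 = sj2utf8(shiftJisInput);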
About the helper file: I used to have a download here, but nowadays I only know unreliable file hosters. So... either http://s000.tinyupload.com/index.php?file_id=95737652978017682303 works for you, or:
First download the "original" data from ftp://ftp.unicode.org/Public/MAPPINGS/OBSOLETE/EASTASIA/JIS/SHIFTJIS.TXT . I can't paste it here because of the length, so we have to hope at least unicode.org stays online.
Then use the program below, piping/redirecting the text file above in and redirecting the binary output to a new file. (Needs a binary-safe shell; no idea if it works on Windows.)
#include<iostream>
#include<string>
#include<cstdio>
#include<cstdint> // uint8_t
using namespace std;
// pipe SHIFTJIS.txt in and pipe to (binary) file out
int main()
{
string s;
uint8_t *mapping; //same bigendian array as in converting function
mapping = new uint8_t[2*(256 + 3*256*16)];
//initializing with space for invalid value, and then ASCII control chars
for(size_t i = 32; i < 256 + 3*256*16; i++)
{
mapping[2 * i] = 0;
mapping[2 * i + 1] = 0x20;
}
for(size_t i = 0; i < 32; i++)
{
mapping[2 * i] = 0;
mapping[2 * i + 1] = i;
}
while(getline(cin, s)) //pipe the file SHIFTJIS to stdin
{
if(s.substr(0, 2) != "0x") continue; //comment lines
uint16_t shiftJisValue, unicodeValue;
if(2 != sscanf(s.c_str(), "%hx %hx", &shiftJisValue, &unicodeValue)) //getting hex values
{
puts("Error hex reading");
continue;
}
size_t offset; //array offset
if((shiftJisValue >> 8) == 0) offset = 0;
else if((shiftJisValue >> 12) == 0x8) offset = 256;
else if((shiftJisValue >> 12) == 0x9) offset = 256 + 16*256;
else if((shiftJisValue >> 12) == 0xE) offset = 256 + 2*16*256;
else
{
puts("Error input values");
continue;
}
offset = 2 * (offset + (shiftJisValue & 0xfff));
if(mapping[offset] != 0 || mapping[offset + 1] != 0x20)
{
puts("Error mapping not 1:1");
continue;
}
mapping[offset] = unicodeValue >> 8;
mapping[offset + 1] = unicodeValue & 0xff;
}
fwrite(mapping, 1, 2*(256 + 3*256*16), stdout);
delete[] mapping;
return 0;
}
Notes:
Two-byte big-endian raw Unicode values (more than two bytes are not necessary here).
First 256 chars (512 bytes) for the single-byte Shift-JIS chars, with the value 0x20 for invalid ones.
Then 3 * 256 * 16 chars for the groups 0x8???, 0x9??? and 0xE???.
That's 25088 bytes in total; e.g. Shift-JIS value 0x82A0 lands at byte offset 2 * (256 + 0x2A0) = 1856.
For those looking for the Shift-JIS conversion table data, you can get the uint8_t array here:
https://github.com/bucanero/apollo-ps3/blob/master/include/shiftjis.h
Also, here's a very simple function to convert basic Shift-JIS chars to ASCII:
const char SJIS_REPLACEMENT_TABLE[] =
" ,.,..:;?!\"*'`*^"
"-_????????*---/\\"
"~||--''\"\"()()[]{"
"}<><>[][][]+-+X?"
"-==<><>????*'\"CY"
"$c&%#&*#S*******"
"*******T><^_'='";
//Convert Shift-JIS characters to ASCII equivalent
void sjis2ascii(char* bData)
{
uint16_t ch;
int i, j = 0;
int len = strlen(bData);
for (i = 0; i < len; i += 2)
{
ch = ((unsigned char)bData[i] << 8) | (unsigned char)bData[i+1]; // cast so char sign extension can't corrupt ch
// 'A' .. 'Z'
// '0' .. '9'
if ((ch >= 0x8260 && ch <= 0x8279) || (ch >= 0x824F && ch <= 0x8258))
{
bData[j++] = (ch & 0xFF) - 0x1F;
continue;
}
// 'a' .. 'z'
if (ch >= 0x8281 && ch <= 0x829A)
{
bData[j++] = (ch & 0xFF) - 0x20;
continue;
}
if (ch >= 0x8140 && ch <= 0x81AC)
{
bData[j++] = SJIS_REPLACEMENT_TABLE[(ch & 0xFF) - 0x40];
continue;
}
if (ch == 0x0000)
{
//End of the string
bData[j] = 0;
return;
}
// Character not found
bData[j++] = bData[i];
bData[j++] = bData[i+1];
}
bData[j] = 0;
return;
}
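A quick test of my own (not from the original answer): per the ranges in the code above, the Shift-JIS bytes 82 60 82 81 82 82 are fullwidth A, a, b, which convert in place to "Aab":
#include <stdio.h>
#include <string.h>
int main(void)
{
char buf[] = "\x82\x60\x82\x81\x82\x82"; /* fullwidth A, a, b in Shift-JIS */
sjis2ascii(buf);
printf("%s\n", buf); /* prints "Aab" */
return 0;
}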
This is a function in C++ that takes a hex string and converts it to its equivalent ASCII characters.
string HEX2STR (string str)
{
string tmp;
const char *c = str.c_str();
unsigned int x;
while(*c != 0) {
sscanf(c, "%2X", &x);
tmp += x;
c += 2;
}
return tmp;
}
If you input the following string:
537461636b6f766572666c6f77206973207468652062657374212121
The output will be:
Stackoverflow is the best!!!
Say I were to input 1,000,000 unique hex strings into this function; it takes a while to compute.
Is there a more efficient way to do this?
Of course. Look up two characters at a time:
unsigned char val(char c)
{
if ('0' <= c && c <= '9') { return c - '0'; }
if ('a' <= c && c <= 'f') { return c + 10 - 'a'; }
if ('A' <= c && c <= 'F') { return c + 10 - 'A'; }
throw "Eeek";
}
std::string decode(std::string const & s)
{
if ((s.size() % 2) != 0) { throw "Eeek"; }
std::string result;
result.reserve(s.size() / 2);
for (std::size_t i = 0; i < s.size() / 2; ++i)
{
unsigned char n = val(s[2 * i]) * 16 + val(s[2 * i + 1]);
result += n;
}
return result;
}
Just since I wrote it anyway, this should be fairly efficient :)
const char lookup[32] =
{0,10,11,12,13,14,15,0,0,0,0,0,0,0,0,0,0,1,2,3,4,5,6,7,8,9,0,0,0,0,0,0};
std::string HEX2STR(std::string str)
{
std::string out;
out.reserve(str.size()/2);
const char* tmp = str.c_str();
unsigned char ch = 0, last = 1;
while(*tmp)
{
ch <<= 4;
ch |= lookup[*tmp&0x1f];
if(last ^= 1)
out += ch;
tmp++;
}
return out;
}
Don't use sscanf. It's a very general, flexible function, which means it's slow to allow for all those use cases. Instead, walk the string and convert each character yourself; that's much faster.
This routine takes a string with (what I call) hex words, often used in embedded ECUs, for example "31 01 7F 33 38 33 37 30 35 31 30 30 20 20 49", and transforms it into readable ASCII where possible.
It transforms by taking care of the discontinuity in the ASCII table ('0'-'9': 48-57, 'A'-'F': 65-70):
bool hexWordsToAscii(const char *stringWithHexWords, char *ascii_buffer, int &ascii_bufferCurrentLength)
{
int i, j, len = strlen(stringWithHexWords);
char c1, c2, r;
i = 0;
j = 0;
while (i < len) {
c1 = stringWithHexWords[i];
c2 = stringWithHexWords[i+1];
if ((int)c1 != 32) { // if space found, skip next section and bump index only once
// skip scary ASCII codes
if (32 < (int)c1 && 127 > (int)c1 && 32 < (int)c2 && 127 > (int)c2) {
// transform by taking the first hex digit's value * 16 plus the second
// hex digit's value, both with the correct ASCII offset
r = (char)(16 * ((int)c1 < 64 ? ((int)c1 - 48) : ((int)c1 - 55))
+ ((int)c2 < 64 ? ((int)c2 - 48) : ((int)c2 - 55)));
if (31 < (int)r && 127 > (int)r)
ascii_buffer[j++] = r; // check result for readability
}
i++; // bump index
}
i++; // bump index once more for the next hex digit
}
ascii_bufferCurrentLength = j;
return true;
}
The hexToString() function converts a hex string to a readable ASCII string. It uses hexCharToInt(), which is defined below it, so a forward declaration is needed:
int hexCharToInt(char a); // forward declaration
string hexToString(string str){
std::stringstream HexString;
for(int i=0;i<str.length();i++){
char a = str.at(i++);
char b = str.at(i);
int x = hexCharToInt(a);
int y = hexCharToInt(b);
HexString << (char)((16*x)+y);
}
return HexString.str();
}
int hexCharToInt(char a){
if(a>='0' && a<='9')
return(a-48);
else if(a>='A' && a<='Z')
return(a-55);
else
return(a-87);
}
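For example (my own test, assuming both functions above are in scope):
#include <iostream>
int main()
{
// 53 74 61 63 6b are the ASCII codes of "Stack"
std::cout << hexToString("537461636b") << std::endl;
return 0;
}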
I tried to convert the digits from a number like 9140 to a char array of bytes. I finally did it, but for some reason one of the numbers is converted wrong.
The idea is to separate each digit and convert it into a byte[4], saving it into a global array of bytes; that means the array holds one digit every 4 positions. I insert each digit at the end of the array, and finally I insert the number of digits at the end of the array.
The problem happens randomly with some values: for example, for the value 25 it works, but for 9140 it returns 9040. What could be the problem? This is the code:
void convertCantToByteArray4Digits(unsigned char *bufferDigits,int cant){
//char bufferDigits[32];
int bufferPos=20;
double cantAux=cant;
int digit=0,cantDigits=0;
double subdigit=0;
while(cantAux > 0){
cout<<"VUELTA"<<endl;
cantAux/=10;
cout<<"cantAux/=10:"<<cantAux<<endl;
cout<<"floor"<<floor(cantAux)<<endl;
subdigit=cantAux-floor(cantAux);
cout<<"subdigit"<<subdigit<<endl;
digit=static_cast<int>(subdigit*10);
cout<<"digit:"<<subdigit*10<<endl;
cantAux=cantAux-subdigit;
cout<<"cantAux=cantAux-subdigit:"<<cantAux<<endl;
bufferDigits[bufferPos-4] = (digit >> 24) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-4])<<std::endl;
bufferDigits[bufferPos-3] = (digit >> 16) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-3])<<std::endl;
bufferDigits[bufferPos-2] = (digit >> 8) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-2])<<std::endl;
bufferDigits[bufferPos-1] = (digit) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-1])<<std::endl;
/*bufferDigits[0] = digit >> 24;
std::cout<<bufferDigits[0]<<std::endl;
bufferDigits[1] = digit >> 16;
bufferDigits[2] = digit >> 8;
bufferDigits[3] = digit;*/
bufferPos-=4;
cantDigits++;
}
cout<<"sizeof"<<sizeof(bufferDigits)<<endl;
cout<<"cantDigits"<<cantDigits<<endl;
bufferPos=24;
bufferDigits[bufferPos-4] = (cantDigits) >> 24;
//std::cout<<bufferDigits[bufferPos-4]<<std::endl;
bufferDigits[bufferPos-3] = (cantDigits) >> 16;
bufferDigits[bufferPos-2] = (cantDigits) >> 8;
bufferDigits[bufferPos-1] = (cantDigits);
}
bufferDigits has a size of 24 bytes, and the cant parameter is the number to convert. I welcome any questions about my code.
I feel this is the most C++ way to do it, if I understood your question correctly:
#include <string>
#include <iterator>
#include <iostream>
#include <algorithm>
template <typename It>
It tochars(unsigned int i, It out)
{
It save = out;
do *out++ = '0' + i%10;
while (i/=10);
std::reverse(save, out);
return out;
}
int main()
{
char buf[10];
char* end = tochars(9140, buf);
*end = 0; // null terminate
std::cout << buf << std::endl;
}
Instead of using a double and the floor function, just use an int and the modulus operator:
void convertCantToByteArray4Digits(unsigned char *bufferDigits,int cant)
{
int bufferPos=20;
int cantAux=cant;
int digit=0,cantDigits=0;
while(cantAux > 0)
{
cout<<"VUELTA"<<endl;
digit = cantAux % 10;
cout<<"digit:"<<digit<<endl;
cantAux /= 10;
cout<<"cantAux/=10:"<<cantAux<<endl;
bufferDigits[bufferPos-4] = (digit >> 24) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-4])<<std::endl;
bufferDigits[bufferPos-3] = (digit >> 16) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-3])<<std::endl;
bufferDigits[bufferPos-2] = (digit >> 8) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-2])<<std::endl;
bufferDigits[bufferPos-1] = (digit) & 0xFF;
std::cout<<static_cast<int>(bufferDigits[bufferPos-1])<<std::endl;
bufferPos-=4;
cantDigits++;
}
// ... then write cantDigits into the end of the buffer, as in the original code
}
Why not use a union?
union {
int i;
char c[4];
};
i = 2530;
// now c is set appropriately
Or memcpy?
memcpy(bufferDigits, &cant, sizeof(int));
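Note that both the union and memcpy copy the int's raw bytes, so the result depends on the machine's byte order. A quick check (my own sketch):
#include <cstdio>
#include <cstring>
int main()
{
int cant = 9140; // 9140 == 0x23B4
unsigned char bytes[sizeof(int)];
std::memcpy(bytes, &cant, sizeof(int));
for (unsigned char b : bytes)
std::printf("0x%02X ", b); // little-endian machines print 0xB4 0x23 0x00 0x00
std::printf("\n");
return 0;
}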
Why so complicated? Just divide and take remainders. Here's a reentrant example: you provide a buffer, and you get back a pointer to the beginning of the converted string:
char * to_string(unsigned int n, char * buf, unsigned int len)
{
if (len < 1) return buf;
buf[--len] = 0;
if (n == 0 && len > 0) { buf[--len] = '0'; }
while (n != 0 && len > 0) { buf[--len] = '0' + (n % 10); n /= 10; }
return &buf[len];
}
Usage: char buf[100]; char * s = to_string(4160, buf, 100);