Anyway, does anyone have an idea how to do this?
Let's say I have
char x[] = "ABCD";
and I want to put it into an int, so I'll have
int y = 'ABCD';
I can only put individual chars, such as int y = x[0];. The purpose is to find the decimal representation, but I want the decimal representation of "ABCD", not just "A".
Finally, I would use sprintf(dest, "%.2u", value); to get the decimal representation of the chars.
EDIT:
I don't understand why, but for "ABCD" this code works:
//unrolled bit ops
const char* x = "ABCD";
uint32_t y = 0;
y |= (uint32_t(x[0]) << 24); //MSB
y |= (uint32_t(x[1]) << 16);
y |= (uint32_t(x[2]) << 8);
y |= (uint32_t(x[3]) /*<< 0*/);
However, if for instance I use "(¸þ¶", I don't get the same result.
EDIT 2:
I've tried your last edit, Sam, but it still doesn't work. The value I'm getting is "4294967294", as opposed to "683212470", the correct value.
I also did this:
int h1 = '(';
int h2 = '¸';
int h3 = 'þ';
int h4 = '¶';
Output:
40
-72
-2
-74
I googled the complete ASCII table and found out that for "þ" the value is "254". I suppose it has something to do with this... I also tried with unsigned, but no good results.
EDIT 3: If I replace const char *x = "(¸þ¶" with int x[] = {40, 184, 254, 182}; (the decimal representation of each character), it works. I can see where things go wrong, but I have no idea how to fix it.
You need to ensure int alignment for that char array for a proper cast, or do a memcpy into that int.
Also take care of the integer's endianness! Furthermore, using C99 integer types such as uint32_t will help to make your code portable.
See this question for how to convert the bits:
strict aliasing and alignment
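A minimal sketch of the memcpy route mentioned above (note the resulting value is in the host's byte order, so it differs between little- and big-endian machines):
#include <cstdint>
#include <cstring>

const char* x = "ABCD";
uint32_t y = 0;
std::memcpy(&y, x, sizeof y); // copies the first 4 bytes; value is in host byte order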
EDIT:
What R. Martinho Fernandes means might be this (not tested):
//unrolled bit ops
const char* x = "ABCD";
uint32_t y = 0;
y |= (uint32_t(uint8_t(x[0])) << 24); //MSB
y |= (uint32_t(uint8_t(x[1])) << 16);
y |= (uint32_t(uint8_t(x[2])) << 8);
y |= (uint32_t(uint8_t(x[3])) /*<< 0*/);
The example above avoids endianness-specific code.
EDIT 2:
For dynamic char arrays (assuming leading zero chars if fewer than 4 have to be converted):
const char* x = "ABC";
size_t nChars = 3;
assert(0 < nChars && nChars <= sizeof(uint32_t));
uint32_t y = 0;
int shift = (nChars*8)-8;
for(size_t i = 0;i < nChars;++i)
{
y |= (uint32_t(uint8_t(x[i])) << shift);
shift -= 8;
}
I have created a sample program, if this is what you want. Note that it interprets the string as hexadecimal digits, so "ABCD" converts to 0xABCD (43981), not to the packed bytes of the characters:
#include <stdio.h>
#include <string.h>
#include <math.h>

unsigned long convertToInt(char *x);

int main(void) {
    char x[] = "ABCD";
    unsigned long y = convertToInt(x);
    printf("Numeric value = %lu\n", y);
    return 0;
}

unsigned long convertToInt(char *x) {
    unsigned long num = 0, i, n;
    char hex_c;
    for (i = 0; i < strlen(x); i++) {
        hex_c = x[i];
        if (hex_c >= '0' && hex_c <= '9') {
            n = hex_c - '0';
        } else if (hex_c >= 'A' && hex_c <= 'F') {
            n = 10 + hex_c - 'A';
        } else if (hex_c >= 'a' && hex_c <= 'f') {
            n = 10 + hex_c - 'a';
        } else {
            printf("Wrong input");
            return 0;
        }
        num += n * (unsigned long)pow(16, strlen(x) - i - 1);
    }
    return num;
}
Related
How can I convert an unsigned char array that contains letters into an integer? I have tried this so far, but it only converts up to four bytes. I also need a way to convert the integer back into the unsigned char array.
int buffToInteger(char * buffer)
{
int a = static_cast<int>(static_cast<unsigned char>(buffer[0]) << 24 |
static_cast<unsigned char>(buffer[1]) << 16 |
static_cast<unsigned char>(buffer[2]) << 8 |
static_cast<unsigned char>(buffer[3]));
return a;
}
It looks like you're trying to use a for loop, i.e. repeating a task over and over again, for an indeterminate number of steps.
unsigned int buffToInteger(char * buffer, unsigned int size)
{
    // assert(size <= sizeof(int));
    unsigned int ret = 0;
    int shift = 0;
    for (int i = size - 1; i >= 0; i--) {
        // cast through unsigned char first to avoid sign extension of bytes >= 0x80
        ret |= static_cast<unsigned int>(static_cast<unsigned char>(buffer[i])) << shift;
        shift += 8;
    }
    return ret;
}
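The question also asked for the reverse direction; a matching sketch (function name hypothetical, big-endian byte order to mirror the loop above):
void integerToBuff(unsigned int value, char* buffer, unsigned int size)
{
    // writes the low 'size' bytes of value, most significant byte first
    for (unsigned int i = 0; i < size; ++i)
        buffer[size - 1 - i] = static_cast<char>((value >> (8 * i)) & 0xFF);
}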
What I think you are going for is called a hash -- converting an object to a unique integer. The problem is a hash IS NOT REVERSIBLE. This hash will produce different results for hash("WXYZABCD", 8) and hash("ABCD", 4). The answer by @Nicholas Pipitone DOES NOT produce different outputs for these different inputs.
Once you compute this hash, there is no way to get the original string back. If you want to keep knowledge of the original string, you MUST keep the original string as a variable.
int hash(char* buffer, size_t size) {
int res = 0;
for (size_t i = 0; i < size; ++i) {
res += buffer[i];
res *= 31;
}
return res;
}
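A quick driver (hypothetical, assuming the hash function above is in scope) to see the two inputs mentioned above hash to different values:
#include <cstdio>

int main() {
    char a[] = "ABCD";
    char b[] = "WXYZABCD";
    std::printf("%d %d\n", hash(a, 4), hash(b, 8)); // prints two different values
    return 0;
}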
Here's how to convert the first sizeof(int) bytes of the char array to an int:
int val = *(unsigned int *)buffer;
and to convert in back:
*(unsigned int *)buffer = val;
Note that your buffer must be at least sizeof(int) bytes long; you should check for this.
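Keep in mind that this cast can run into the alignment and strict-aliasing issues mentioned earlier in this thread; a memcpy-based sketch avoids both, in either direction:
#include <cstring>

unsigned int val = 0;
std::memcpy(&val, buffer, sizeof val); // char array -> int
std::memcpy(buffer, &val, sizeof val); // int -> char array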
I have a string value:
string str = "2018";
Now I have to store it in an unsigned char array as a hex representation, but without really converting it to hex:
unsigned char data [2]; //[0x20,0x18]
If I do it this way
data[0] = 0x20;
data[1] = 0x18;
It works, but my input is a string; how can I resolve this?
Edit
If my input is unsigned char instead of string, like
unsigned char y1 = 20;
unsigned char y2 = 18;
is there any better way?
A brief search turned up the function QString::toInt(bool*, int), which can be useful for your intent.
Basically you could:
bool ok;
if (str.size() % 2 == 1) {
    str.prepend('0');
}
for (int i = 0; i < str.size() / 2; i++) {
    data[i] = str.mid(2 * i, 2).toInt(&ok, 16);
}
I did not try this code; there is surely a better way to extract the substring, and probably a more efficient way than iterating over it.
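If Qt is not available, a plain C++ sketch of the same idea (untested against the original; it parses each two-digit group with std::stoi in base 16):
#include <cstddef>
#include <string>

std::string str = "2018";
if (str.size() % 2 == 1)
    str = "0" + str;
unsigned char data[2];
for (std::size_t i = 0; i < str.size() / 2; ++i)
    data[i] = static_cast<unsigned char>(
        std::stoi(str.substr(2 * i, 2), nullptr, 16)); // "20" -> 0x20, "18" -> 0x18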
Perhaps you could try something like this:
#include <cstdio>
#include <iostream>
#include <string>
int main()
{
std::string s = "2018";
unsigned i;
std::sscanf(s.c_str(), "%04x", &i);
unsigned char data[2];
data[0] = i >> 8;
data[1] = i;
std::cout << std::hex << (int)data[0] << " " << (int)data[1] << std::endl;
return 0;
}
https://ideone.com/SyYKUl
Prints:
20 18
If you can assume the string to have 4 digits, you can convert it to BCD format simply and efficiently this way:
void convert_to_bcd4(unsigned char *data, const char *str) {
data[0] = (str[0] - '0') * 16 + (str[1] - '0');
data[1] = (str[2] - '0') * 16 + (str[3] - '0');
}
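For example (hypothetical usage; the values follow from the arithmetic above):
unsigned char data[2];
convert_to_bcd4(data, "2018"); // data[0] == 0x20, data[1] == 0x18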
You can complete the conversion of "2018" to 0x20 0x18 using a hex string to binary converter. I think, for example, sscanf("%x",....) will do this. This typically gives an int. You can extract the byte values from the int in the normal way. (This method does not check for errors.)
I want to convert an integer to a binary string and then store each bit of the string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the specified array. How do I do this in C++?
Pseudo code:
int value = ????; // assuming a 32-bit int
int i;
for (i = 0; i < 32; ++i) {
    array[i] = (value >> i) & 1;
}
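A runnable version of that pseudocode might look like this (the value is an arbitrary example; note that array[0] receives the least significant bit):
#include <cstdio>

int main() {
    int value = 413523152; // arbitrary example value
    int array[32];
    for (int i = 0; i < 32; ++i)
        array[i] = (value >> i) & 1; // array[0] = LSB
    for (int i = 31; i >= 0; --i)
        std::printf("%d", array[i]); // print most significant bit first
    std::printf("\n");
    return 0;
}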
#include <climits> // for CHAR_BIT
#include <iostream>
#include <iterator>

template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number,
output_iterator first, output_iterator last)
{
const unsigned number_bits = CHAR_BIT*sizeof(int);
//extract bits one at a time
for(unsigned i=0; i<number_bits && first!=last; ++i) {
const unsigned shift_amount = number_bits-i-1;
const unsigned this_bit = (number>>shift_amount)&1;
*first = this_bit;
++first;
}
//pad the rest with zeros
while(first != last) {
*first = 0;
++first;
}
}
int main() {
int number = 413523152;
int array[32];
convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
for(int i=0; i<32; ++i)
std::cout << array[i] << ' ';
}
Proof of compilation here
You could use C++'s bitset library, as follows.
#include <iostream>
#include <bitset>
using namespace std;

int main()
{
    int N; // input number in base 10
    cin >> N;
    int O[32]; // the output array
    bitset<32> A = N; // A will hold the binary representation of N
    for (int i = 0, j = 31; i < 32; i++, j--)
    {
        // Assigning the bits one by one.
        O[i] = A[j];
    }
    return 0;
}
A couple of points to note here:
First, 32 in the bitset declaration statement tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will have 32 bits, possibly with many leading zeroes.
Second, bitset is a really flexible way of handling binary: you can give a string or a number as its input, and you can use the bitset as an array or as a string. It's a really handy library.
You can print out the bitset variable A as
cout<<A;
and see how it works.
You can do it like this:
while (input != 0) {
    if (input & 1)
        result[index] = 1;
    else
        result[index] = 0;
    input >>= 1; // dividing by two
    index++;
}
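A self-contained sketch of that loop (variable declarations assumed; note it stores the bits LSB-first and stops at the highest set bit):
#include <iostream>

int main() {
    int input = 19;       // example input
    int result[32] = {0}; // unused elements stay zero
    int index = 0;
    while (input != 0) {
        result[index] = input & 1;
        input >>= 1; // dividing by two
        index++;
    }
    for (int i = index - 1; i >= 0; --i)
        std::cout << result[i]; // prints 10011 for 19
    std::cout << '\n';
    return 0;
}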
As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:
// Note: this stores the least significant bit first (it does not depend on byte endianness)
int x = 0xdeadbeef; // Your integer?
int arr[sizeof(int)*CHAR_BIT];
for(int i = 0 ; i < sizeof(int)*CHAR_BIT ; ++i) {
arr[i] = (x & (0x01 << i)) ? 1 : 0; // Take the i-th bit
}
Decimal to Binary: size-independent
Two ways: both store the binary representation in a dynamically allocated array bits (MSB to LSB).
First Method:
#include <limits.h> // for CHAR_BIT
#include <stdlib.h> // for calloc
int* binary(int dec){
int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
if(bits == NULL) return NULL;
int i = 0;
// conversion
int left = sizeof(int) * CHAR_BIT - 1;
for(i = 0; left >= 0; left--, i++){
bits[i] = !!(dec & ( 1u << left ));
}
return bits;
}
Second Method:
#include <limits.h> // for CHAR_BIT
#include <stdlib.h> // for calloc
int* binary(unsigned int num)
{
unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
// mask = 1000 0000 0000 0000 0000 0000 0000 0000 (MSB of a 32-bit int set)
int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
if(bits == NULL) return NULL;
int i = 0;
//conversion
while(mask > 0){
if((num & mask) == 0 )
bits[i] = 0;
else
bits[i] = 1;
mask = mask >> 1 ; // Right Shift
i++;
}
return bits;
}
I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)
int BinToDec(int Value, int Padding = 8)
{
int Bin = 0;
for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
{
Bin += ((Value >> I - 1) & 1) * Pos;
}
return Bin;
}
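Hypothetical usage: the bits of 5 (binary 101) come back as the base-10 integer 101:
int b = BinToDec(5); // b == 101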
This is what I use; it also lets you give the number of bits that will be in the final vector, and it fills any unused bits with leading 0s.
#include <vector>

std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
    std::vector<int> r;
    // make binary vec of minimum size backwards (LSB at .begin() and MSB at .end())
    while (num_to_convert_to_binary > 0)
    {
        if (num_to_convert_to_binary % 2 == 0)
            r.push_back(0);
        else
            r.push_back(1);
        num_to_convert_to_binary = num_to_convert_to_binary / 2;
    }
    // pad out to the requested width with zeros
    while ((int)r.size() < num_bits_in_out_vec)
        r.push_back(0);
    return r;
}
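Hypothetical usage: 5 is 101 in binary, so padded to 8 bits the vector holds {1, 0, 1, 0, 0, 0, 0, 0} (LSB first):
std::vector<int> bits = to_binary(5, 8);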
I am making a hashing program that counts the number of instances of each word in a text file. This is my count function; I am getting an error when trying to run it:
56 Expression: (unsigned)(c + 1) <= 256
It appears to be crashing in the isalpha function when it reads in the very first non-alpha garbage characters in the text file.
int
count(ifstream & fs,int size)
{
int find(const char *,int, int);
int f,i,l,y;
char ch,*p,s[maxs+1];
for(y = l = i = 0; i < size; i++)
{
table[i].k = 0;
table[i].p = nill;
}
p = s;
while(fs.get(ch))
{
if(isalpha(ch))
{
if(l < maxs)
{
l++;
*p++ = (char)(ch | 0x20);
}
}
else
{
if(l)
{
*p = '\0';
if(f = find(s,size,l) < 0)
{
return(f);
}
y += f;
p = s;
l = 0;
}
}
}
}
It looks to me like isalpha is failing an assertion. Most likely (unsigned)(c + 1) <= 256 is the expression that is being asserted. It looks like this assertion is trying to ensure the value of c falls within [0, 255].
Assuming ch is a signed char and you try to store the value 128 in it, then pass it to isalpha, the left hand side of the assertion is going to evaluate to a very large number, causing it to fail.
128 can't be stored in a signed char, so the value of ch actually becomes -128, which is the signed representation of unsigned 128 (1000 0000 in binary). isalpha is taking ch as an int, so the (c + 1) is actually (-128 + 1), which becomes -127. This value is then cast to an unsigned integer, which turns into a very large value.
A solution is to change ch in your code to an unsigned char, if it's possible that its value can be greater than 127.
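Concretely, the usual idiom is to cast at the call site so that negative char values never reach isalpha (a minimal sketch against the code above):
if (isalpha((unsigned char)ch)) // safe even if ch holds a value above 127
{
    /* ... */
}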
Code Taken From: Bytes to Binary in C Credit: BSchlinker
I modified the following code to take more than 1 byte at a time. I got it half working and then got really confused by my loops. :( I've spent the last day and a half trying to figure it out... but my C++ skills are not really that good (still learning!)
#include <iostream>
using namespace std;
char show_binary(unsigned char u, unsigned char *result,int len);
int main()
{
unsigned char p40[3] = {0x40, 0x00, 0x0a};
unsigned char bits[8*(sizeof(p40))];
int c;
c=sizeof(p40);
show_binary(*p40, bits, 3);
cout << "\n\n";
cout << "BIN = ";
do{
for (int i = 0; i < 8; i++)
printf("%d",bits[i+(8*c)]);
c++;
}while(c < 3);
cout << "\n";
int a;
cin >> a;
return 0;
}
char show_binary(unsigned char u, unsigned char *result, int len)
{
unsigned char mask = 1;
unsigned char bits[8*sizeof(result)];
int a,b,c;
a=0;
b=0;
c=len;
do{
for (int i = 0; i < 8; i++)
bits[i+(8*a)] = (u[&a] & (mask << i)) != 0;
a++;
}while(a < len);
//Need to reverse it?
do{
for (int i = 8; i != -1; i--)
result[i+(8*c)] = bits[i+(8*c)];
b++;
c--;
}while(b < len);
return *result;
}
After I spit out:
cout << "BIN = ";
do{
for (int i = 0; i < 8; i++)
printf("%d",bits[i+(8*c)]);
c++;
}while(c < 3);
I'd like to take bit[11] ~ bit[the end] and compute a byte every 8 bits, if that makes sense. But first the function should work. Any pro tips on how this should be done? And of course, rip my code apart. I like to learn.
Man, there is a lot going on in this code, so it's hard to know where to start. Suffice it to say, you're trying a bit too hard. It sounds like you are trying to 1) pass in a byte array; 2) turn those bytes into a string representation of the binary; and 3) turn that string representation back into a value?
It just so happens I recently did something similar to this in C, which should still work using a C++ compiler.
#include <stdio.h>
#include <string.h>
/* A macro to get a substring */
#define substr(dest, src, dest_size, startPos, strLen) snprintf(dest, dest_size, "%.*s", strLen, src+startPos)
/* Pass in char* array of bytes, get binary representation as string in bitStr */
void str2bs(const char *bytes, size_t len, char *bitStr) {
size_t i;
char buffer[9] = "";
for(i = 0; i < len; i++) {
sprintf(buffer,
"%c%c%c%c%c%c%c%c",
(bytes[i] & 0x80) ? '1':'0',
(bytes[i] & 0x40) ? '1':'0',
(bytes[i] & 0x20) ? '1':'0',
(bytes[i] & 0x10) ? '1':'0',
(bytes[i] & 0x08) ? '1':'0',
(bytes[i] & 0x04) ? '1':'0',
(bytes[i] & 0x02) ? '1':'0',
(bytes[i] & 0x01) ? '1':'0');
strncat(bitStr, buffer, 8);
buffer[0] = '\0';
}
}
To get the string of binary back into a value it can by done with bit shifting:
unsigned char bs2uc(char *bitStr) {
unsigned char val = 0;
int toShift = 0;
int i;
for(i = strlen(bitStr)-1; i >= 0; i--) {
if(bitStr[i] == '1') {
val = (1 << toShift) | val;
}
toShift++;
}
return val;
}
Once you had a binary string you could then take substrings of any arbitrary 8 bits (or less, I guess) and turn them back into bytes.
char *bitStr; /* Let's pretend this is populated with a valid string */
char byte[9] = "";
substr(byte, bitStr, 9, 4, 8);
/* This would create a substring of length 8 starting from index 4 of bitStr */
unsigned char b = bs2uc(byte);
I've actually created a whole suite of value -> binary string -> value functions if you'd like to take a look at them. GitHub - binstr