I am making a hashing program that counts the number of occurrences of each word in a text file. This is my count function. I am getting an error when trying to run it:
56 Expression: (unsigned)(c + 1) <= 256
It appears to be crashing in the isalpha function when it reads the very first non-alpha garbage characters in the text file.
int
count(ifstream & fs,int size)
{
int find(const char *,int, int);
int f,i,l,y;
char ch,*p,s[maxs+1];
for(y = l = i = 0; i < size; i++)
{
table[i].k = 0;
table[i].p = nill;
}
p = s;
while(fs.get(ch))
{
if(isalpha(ch))
{
if(l < maxs)
{
l++;
*p++ = (char)(ch | 0x20);
}
}
else
{
if(l)
{
*p = '\0';
if((f = find(s,size,l)) < 0)
{
return(f);
}
y += f;
p = s;
l = 0;
}
}
}
return(y);
}
It looks to me like isalpha is failing an assertion. Most likely (unsigned)(c + 1) <= 256 is the expression being asserted. This assertion ensures the value of c falls within [-1, 255], i.e., EOF or a value representable as an unsigned char.
Assuming ch is a signed char and you try to store the value 128 in it, then pass it to isalpha, the left hand side of the assertion is going to evaluate to a very large number, causing it to fail.
128 can't be stored in a signed char, so the value of ch actually becomes -128, which is what the bit pattern 1000 0000 means when interpreted as signed. isalpha takes its argument as an int, so (c + 1) is actually (-128 + 1), which is -127. This value is then cast to an unsigned integer, which turns it into a very large value.
A solution is to change ch in your code to an unsigned char, if it's possible that its value can be greater than 127.
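Alternatively, you can keep ch as a plain char and sanitize it at the call site. A minimal sketch of the idea (the helper name is mine):
#include <cctype>
// Sketch: cast to unsigned char before calling isalpha, so bytes >= 128
// read from the file map into [128, 255] instead of becoming negative
// values, which trigger the assertion inside isalpha.
bool is_word_char(char ch)
{
    return std::isalpha(static_cast<unsigned char>(ch)) != 0;
}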
I have written a program that sets up a client/server TCP socket over which the user sends an integer value to the server through the use of a terminal interface. On the server side I am executing byte commands for which I need hex values stored in my array.
sprintf(mychararray, "%X", myintvalue);
This code takes my integer and prints it as a hex value into a char array. The only problem is that when I use that array to set my commands, it registers as ASCII characters. So, for example, if I send an integer equal to 3000, it is converted to 0x0BB8 and then stored as 'B' 'B' '8', which corresponds to 42 42 38 in hex. I have looked all over the place for a solution and have not been able to come up with one.
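To make the mismatch concrete, here is a minimal sketch (variable names are illustrative) contrasting the ASCII bytes sprintf produces with the raw bytes the command presumably needs:
#include <cstdio>
int main()
{
    int myintvalue = 3000;                    // 0x0BB8
    char text[16];
    sprintf(text, "%X", myintvalue);          // "BB8": the ASCII bytes 42 42 38
    unsigned char raw[2];
    raw[0] = (myintvalue >> 8) & 0xFF;        // 0x0B: the actual high byte
    raw[1] = myintvalue & 0xFF;               // 0xB8: the actual low byte
    printf("text: %02X %02X %02X\n", text[0], text[1], text[2]);
    printf("raw : %02X %02X\n", raw[0], raw[1]);
    return 0;
}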
Finally came up with a solution to my problem. First I created an array and stored all byte values from 0x00 to 0xFF in it.
char m_list[256]; //array defined in class
m_list[0] = 0x00; //set first array index to zero
int count = 1; //count variable to step through the array and set members
while (count < 256)
{
m_list[count] = m_list[count -1] + 0x01; //populate array with hex from 0x00 - 0xFF
count++;
}
Next I created a function that lets me group my hex values into individual bytes and store into the array that will be processing my command.
void parse_input(char hex_array[], int i, char ans_array[])
{
int n = 0;
int j = 0;
int idx = 0;
string hex_values;
while (n < i-1)
{
if (hex_array[n] == '\0')
{
hex_values = '0';
}
else
{
hex_values = hex_array[n];
}
if (hex_array[n+1] == '\0')
{
hex_values += '0';
}
else
{
hex_values += hex_array[n+1];
}
cout<<"This is the string being used in stoi: "<<hex_values; //statement for testing
idx = stoul(hex_values, nullptr, 16);
ans_array[j] = m_list[idx];
n = n + 2;
j++;
}
}
This function will be called right after my previous code.
sprintf(mychararray, "%X", myintvalue);
parse_input(arrayA, sizeof(arrayA), arrayB);
Example: arrayA is an 8-byte char array and arrayB is a 4-byte char array. arrayA should be double the size of arrayB, since you are taking two ASCII values and making one byte pair, e.g. 'A' 'B' = 0xAB.
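Putting the pieces together, a call sequence might look like this sketch (it assumes the m_list table and parse_input above are in scope, and zero-pads the hex string so the pairs always line up):
char arrayA[9];                       // 8 hex digits plus the terminating '\0'
char arrayB[4];                       // receives the packed bytes
int myintvalue = 3000;
sprintf(arrayA, "%08X", myintvalue);  // arrayA now holds "00000BB8"
parse_input(arrayA, 8, arrayB);       // arrayB becomes 0x00 0x00 0x0B 0xB8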
While I was trying to understand your question, I realized that what you needed was more than a single variable. You needed a class, because you want both a string that represents the hex code to be printed and the number itself in the form of an unsigned 16-bit integer, which I deduced would be unsigned short int. So I created a class named hexset that does all this for you (I got the idea from bitset):
#include <iostream>
#include <string>
class hexset {
public:
hexset(int num) {
this->hexnum = (unsigned short int) num;
this->hexstring = hexset::to_string(num);
}
unsigned short int get_hexnum() {return this->hexnum;}
std::string get_hexstring() {return this->hexstring;}
private:
static std::string to_string(int decimal) {
int length = int_length(decimal);
std::string ret = "";
for (int i = (length > 1 ? int_length(decimal) - 1 : length); i >= 0; i--) {
ret = hex_arr[decimal%16]+ret;
decimal /= 16;
}
if (ret[0] == '0') {
ret = ret.substr(1,ret.length()-1);
}
return "0x"+ret;
}
static int int_length(int num) {
int ret = 1;
while (num > 10) {
num/=10;
++ret;
}
return ret;
}
static constexpr char hex_arr[16] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
unsigned short int hexnum;
std::string hexstring;
};
constexpr char hexset::hex_arr[16];
int main() {
int number_from_file = 3000; // This number is in all forms technically, hex is just another way to represent this number.
hexset hex(number_from_file);
std::cout << hex.get_hexstring() << ' ' << hex.get_hexnum() << std::endl;
return 0;
}
I assume you'll probably want to do some operator overloading to make it so you can add and subtract from this number or assign new numbers or do any kind of mathematical or bit shift operation.
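For example, addition and subtraction might be sketched as non-member operators built on the existing constructor (untested; this assumes the hexset class above):
// Sketch only: reuses the int constructor so hexstring stays in sync with hexnum.
hexset operator+(hexset a, hexset b) {
    return hexset(a.get_hexnum() + b.get_hexnum());
}
hexset operator-(hexset a, hexset b) {
    return hexset(a.get_hexnum() - b.get_hexnum());
}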
I am writing a program that takes in numbers entered by a user and stores them in an array. The program then converts the values to binary and stores them in a new array. I am having trouble padding the binary values to the correct bit width.
For example, a user enters 3 and 4.
My program stores and converts them to binary, resulting in 11 and 100. How can I get it to store 011 and 100?
I believe I'll need to convert to a char array or a string of some sort, but I have no idea what steps I should follow.
I think what you’re trying to do is this:
void Convert(int Number, char *Array, int Bits) {
int Bit;
for (Bit = 0; Bit < Bits; Bit++) {
if ((Number & (1 << (Bits - (Bit + 1)))) > 0) {
Array[Bit] = '1';
} else {
Array[Bit] = '0';
}
}
}
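One caveat worth showing in a quick usage sketch: Convert writes exactly Bits characters and does not append a terminating '\0', so the caller must add one before printing (this assumes the Convert function above is in scope):
#include <cstdio>
int main()
{
    char bits[4];          // room for 3 digit characters plus '\0'
    Convert(4, bits, 3);   // fill bits with the 3-bit representation of 4
    bits[3] = '\0';        // terminate before printing
    printf("%s\n", bits);  // prints: 100
    return 0;
}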
I did almost the same thing, as given below. You can try this:
//Decimal to Binary
char* dTb(int num, unsigned bit)
{
char *binStr = new char[bit + 1]; // note: new[], since new char(bit + 1) would allocate a single char
int len = bit;
binStr[bit] = '\0';
while (bit--) binStr[bit] = '0';
if (num == 0)
return binStr;
int r;
while (num && len)
{
r = num % 2;
binStr[--len] = r + '0';
num /= 2;
}
return binStr;
}
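Since dTb allocates with new[], the caller owns the buffer and must release it with delete[]. A usage sketch (assuming the dTb function above):
#include <cstdio>
int main()
{
    char *bin = dTb(4, 3);   // 4 as a 3-digit binary string
    printf("%s\n", bin);     // prints: 100
    delete[] bin;            // release the buffer dTb allocated
    return 0;
}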
I wrote this code to show the Fibonacci series using recursion, but it does not show the correct values for n > 43 (e.g., for n = 100 it shows -980107325).
#include<stdio.h>
#include<conio.h>
void fibonacciSeries(int);
void fibonacciSeries(int n)
{
static long d = 0, e = 1;
long c;
if (n>1)
{
c = d + e;
d = e;
e = c;
printf("%d \n", c);
fibonacciSeries(n - 1);
}
}
int main()
{
long a, n;
long long i = 0, j = 1, f;
printf("How many number you want to print in the fibonnaci series :\n");
scanf("%d", &n);
printf("\nFibonacci Series: ");
printf("%d", 0);
fibonacciSeries(n);
_getch();
return 0;
}
The value of fib(100) is so large that it will overflow even a 64-bit number. To operate on such large values, you need arbitrary-precision arithmetic. Arbitrary-precision arithmetic is provided by neither the C nor the C++ standard library, so you'll need to either implement it yourself or use a library written by someone else.
For smaller values that do fit in a long long, your problem is that you use the wrong printf format specifier. To print a long long, you need to use %lld.
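For instance (fib(92) is the largest Fibonacci number that fits in a signed 64-bit integer):
#include <stdio.h>
int main(void)
{
    long long f = 7540113804746346429LL; /* fib(92) */
    printf("%lld\n", f);                 /* %lld matches long long */
    return 0;
}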
The code overflows the range of the integer type used, long.
It could use long long, but even that may not handle Fib(100), which needs at least 69 bits.
It could use long double if 1.0/LDBL_EPSILON > 3.6e20.
Various libraries exist to handle very large integers.
For this task, all that is needed is a way to add two large integers. Consider using a string. An inefficient but simple string addition follows. There are no contingencies for buffer overflow.
#include <stdio.h>
#include <string.h>
#include <assert.h>
char *str_reverse_inplace(char *s) {
char *left = s;
char *right = s + strlen(s);
while (right > left) {
right--;
char t = *right;
*right = *left;
*left = t;
left++;
}
return s;
}
char *str_add(char *ssum, const char *sa, const char *sb) {
const char *pa = sa + strlen(sa);
const char *pb = sb + strlen(sb);
char *psum = ssum;
int carry = 0;
while (pa > sa || pb > sb || carry) {
int sum = carry;
if (pa > sa) sum += *(--pa) - '0';
if (pb > sb) sum += *(--pb) - '0';
*psum++ = sum % 10 + '0';
carry = sum / 10;
}
*psum = '\0';
return str_reverse_inplace(ssum);
}
int main(void) {
char fib[3][300];
strcpy(fib[0], "0");
strcpy(fib[1], "1");
int i;
for (i = 2; i <= 1000; i++) {
printf("Fib(%3d) %s.\n", i, str_add(fib[2], fib[1], fib[0]));
strcpy(fib[0], fib[1]);
strcpy(fib[1], fib[2]);
}
return 0;
}
Output
Fib( 2) 1.
Fib( 3) 2.
Fib( 4) 3.
Fib( 5) 5.
Fib( 6) 8.
...
Fib(100) 3542248xxxxxxxxxx5075. // Some xx left in for a bit of mystery.
Fib(1000) --> 43466...about 200 more digits...8875
You can print some large Fibonacci numbers using only char, int and <stdio.h> in C.
Here are some headers:
#include <stdio.h>
#define B_SIZE 10000 // max number of digits
typedef int positive_number;
struct buffer {
size_t index;
char data[B_SIZE];
};
And some functions:
void init_buffer(struct buffer *buffer, positive_number n) {
for (buffer->index = B_SIZE; n; buffer->data[--buffer->index] = (char) (n % 10), n /= 10);
}
void print_buffer(const struct buffer *buffer) {
for (size_t i = buffer->index; i < B_SIZE; ++i) putchar('0' + buffer->data[i]);
}
void fly_add_buffer(struct buffer *buffer, const struct buffer *client) {
positive_number a = 0;
size_t i = (B_SIZE - 1);
for (; i >= client->index; --i) {
buffer->data[i] = (char) (buffer->data[i] + client->data[i] + a);
buffer->data[i] = (char) (buffer->data[i] - (a = buffer->data[i] > 9) * 10);
}
for (; a; buffer->data[i] = (char) (buffer->data[i] + a), a = buffer->data[i] > 9, buffer->data[i] = (char) (buffer->data[i] - a * 10), --i);
if (++i < buffer->index) buffer->index = i;
}
Example usage:
int main() {
struct buffer number_1, number_2, number_3;
init_buffer(&number_1, 0);
init_buffer(&number_2, 1);
for (int i = 0; i < 2500; ++i) {
number_3 = number_1;
fly_add_buffer(&number_1, &number_2);
number_2 = number_3;
}
print_buffer(&number_1);
}
// print 131709051675194962952276308712 ... 935714056959634778700594751875
Is char still the best C type for this? The given code prints f(2500), a 523-digit number.
Info: f(2e5) has 41,798 digits; see also Factorial(10000) and Fibonacci(1000000).
Well, you could try implementing a BigInt in C++ or C.
Useful material:
How to implement big int in C++
For this purpose you need to implement a BigInteger. There is no built-in support for this in current C++. You can find some advice on Stack Overflow.
Or you can use a library like GMP.
Here are also some implementations:
E-maxx (description in Russian).
Or find an open implementation on GitHub.
Try a different printf format, and use an unsigned type to get a wider range.
If you use unsigned long long, you can go up to 18,446,744,073,709,551,615, i.e., up to the 93rd number of the Fibonacci series, 12200160415121876738; after that you will get incorrect results, because the 94th number, 19740274219868223167, is too big for an unsigned long long.
Keep in mind that the n-th Fibonacci number is approximately ((1 + sqrt(5))/2)^n / sqrt(5).
This lets you compute the largest n for which the result fits in a 32- or 64-bit unsigned integer. For signed types, remember that you lose one bit.
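If you want to find that cutoff empirically rather than from the formula, a small sketch using GCC/Clang's __builtin_add_overflow might look like this:
#include <stdio.h>
#include <stdint.h>
int main(void)
{
    uint64_t a = 0, b = 1, c;                    /* fib(0), fib(1) */
    int n = 1;                                   /* b currently holds fib(n) */
    while (!__builtin_add_overflow(a, b, &c)) {  /* stop when fib(n+1) overflows */
        a = b;
        b = c;
        ++n;
    }
    printf("fib(%d) = %llu is the last to fit in 64 bits\n",
           n, (unsigned long long)b);            /* prints n = 93 */
    return 0;
}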
Anyway, does anyone have an idea how to do this?
Let's say I have
char x[] = "ABCD";
and I want to put it into an int, so I'll have
int y = 'ABCD';
I can only put in individual chars, such as int y = x[0];. The purpose is to find the decimal representation, but I want the decimal representation of "ABCD", not just "A".
Finally, I would use sprintf(dest, "%.2u", value); to get the decimal representation of the chars.
EDIT:
I don't understand why, but for "ABCD" this code works:
//unrolled bit ops
const char* x = "ABCD";
uint32_t y = 0;
y |= (uint32_t(x[0]) << 24); //MSB
y |= (uint32_t(x[1]) << 16);
y |= (uint32_t(x[2]) << 8);
y |= (uint32_t(x[3]) /*<< 0*/);
However, if for instance I use "(¸þ¶", I don't get the same result.
EDIT 2:
I've tried your last edit, Sam, but it still doesn't work. The value I'm getting is 4294967294, as opposed to 683212470, the correct value.
I also did this:
int h1 = '(';
int h2 = '¸';
int h3 = 'þ';
int h4 = '¶';
Output:
40
-72
-2
-74
I googled for the complete ASCII table, and I found that for "þ" the value is 254. I suppose it has something to do with this... I also tried with unsigned, but no good results.
EDIT 3: If I replace const char *x = "(¸þ¶" with int x[] = {40, 184, 254, 182}; (the decimal representation of each character), it works. I can see where things go wrong, but I have no idea how to fix it.
You need to ensure int alignment of that char array for a proper cast, or do a memcpy into the int.
Also take care of the integer's endianness! Furthermore, using C99 integer types such as uint32_t will help make your code portable.
See this question for how to convert the bits:
strict aliasing and alignment
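The memcpy variant mentioned above might look like this sketch (the result then reflects the machine's native byte order):
#include <cstdio>
#include <cstring>
#include <cstdint>
int main()
{
    const char x[] = "ABCD";
    uint32_t y;
    std::memcpy(&y, x, sizeof y);  // well-defined: no aliasing or alignment issues
    std::printf("%u\n", y);        // little-endian machines print 1145258561 (0x44434241)
    return 0;
}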
EDIT:
What R. Martinho Fernandes means might be this (not tested):
//unrolled bit ops
const char* x = "ABCD";
uint32_t y = 0;
y |= (uint32_t(uint8_t(x[0])) << 24); //MSB
y |= (uint32_t(uint8_t(x[1])) << 16);
y |= (uint32_t(uint8_t(x[2])) << 8);
y |= (uint32_t(uint8_t(x[3])) /*<< 0*/);
The above example avoids endianness-specific code.
EDIT 2:
For dynamic char arrays (assuming leading zero chars if fewer than 4 have to be converted):
const char* x = "ABC";
size_t nChars = 3;
assert(0 < nChars && nChars <= sizeof(uint32_t));
uint32_t y = 0;
int shift = (nChars*8)-8;
for(size_t i = 0;i < nChars;++i)
{
y |= (uint32_t(uint8_t(x[i])) << shift);
shift -= 8;
}
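As a sanity check against the value from the question, feeding the same loop the four byte values the OP listed reproduces the expected result:
#include <cstdio>
#include <cstdint>
#include <cstddef>
int main()
{
    // The four bytes of "(¸þ¶" from the question, written out explicitly.
    const unsigned char x[] = {40, 184, 254, 182};
    const size_t nChars = 4;
    uint32_t y = 0;
    int shift = (nChars * 8) - 8;
    for (size_t i = 0; i < nChars; ++i)
    {
        y |= (uint32_t(x[i]) << shift);
        shift -= 8;
    }
    printf("%u\n", y);  // prints 683212470 (0x28B8FEB6)
    return 0;
}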
I have created a sample program, if this is what you want.
Include the needed headers (stdio.h, stdlib.h, math.h, string.h).
unsigned long convertToInt(char *x);
int main(void) {
char x[] = "ABCD";
unsigned long y = 0;
y = convertToInt(x);
printf("Numeric value = %lu\n", y);
return 0;
}
unsigned long convertToInt(char *x) {
unsigned long num = 0, i, n;
char hex_c;
for(i = 0; i< strlen(x); i++) {
hex_c = x[i];
if (hex_c >= '0' && hex_c <= '9') {
n = hex_c - '0';
} else if (hex_c >= 'A' && hex_c <= 'F') {
n = 10 + hex_c - 'A';
} else if (hex_c >= 'a' && hex_c <= 'f') {
n = 10 + hex_c - 'a';
} else {
printf("Wrong input");
return 0;
}
num += n * (pow(16, (strlen(x) - i - 1)));
}
return num;
}
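Note that the standard library already provides this conversion: strtoul with base 16 parses a hex string directly, so the hand-rolled loop is mainly instructive:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    char x[] = "ABCD";
    unsigned long y = strtoul(x, NULL, 16); /* parse as base 16 */
    printf("Numeric value = %lu\n", y);     /* prints 43981 (0xABCD) */
    return 0;
}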
How can I tell if a binary number is negative?
Currently I have the code below. It works fine converting to binary. When converting to decimal, I need to know whether the leftmost bit is 1 to tell if the number is negative, but I cannot seem to figure out how to do that.
Also, instead of making my Bin2 function print 1's and 0's, how can I make it return an integer? I didn't want to store it in a string and then convert to an int.
EDIT: I'm using 8-bit numbers.
int Bin2(int value, int Padding = 8)
{
for (int I = Padding; I > 0; --I)
{
if (value & (1 << (I - 1)))
std::cout<< '1';
else
std::cout<<'0';
}
return 0;
}
int Dec2(int Value)
{
//bool Negative = (Value & 10000000);
int Dec = 0;
for (int I = 0; Value > 0; ++I)
{
if(Value % 10 == 1)
{
Dec += (1 << I);
}
Value /= 10;
}
//if (Negative) (Dec -= (1 << 8));
return Dec;
}
int main()
{
Bin2(25);
std::cout<<"\n\n";
std::cout<<Dec2(11001);
}
You are checking for a negative value incorrectly. Do the following instead:
bool Negative = (value & 0x80000000); // works only where int is 32 bits wide
Or maybe just compare it with 0:
bool Negative = (value < 0);
Why don't you just compare it to 0? That should work fine, and you almost certainly can't do this more efficiently than the compiler.
I am entirely unclear whether this is what the OP is looking for, but it's worth a toss:
If you know you have a value in a signed int that is supposed to represent a signed 8-bit value, you can pull it apart, store it in a signed 8-bit variable, then promote it back to a native signed int, like this:
#include <stdio.h>
int main(void)
{
// signed integer, value is 245. 8bit signed value is (-11)
int num = 0xF5;
// pull out the low 8 bits, storing them in a signed char.
signed char ch = (signed char)(num & 0xFF);
// now let the signed char promote to a signed int.
int res = ch;
// finally print both.
printf("%d ==> %d\n",num, res);
// do it again for an 8 bit positive value
// this time with just direct casts.
num = 0x70;
printf("%d ==> %d\n", num, (int)((signed char)(num & 0xFF)));
return 0;
}
Output
245 ==> -11
112 ==> 112
Is that what you're trying to do? In short, the code above takes the 8 bits sitting at the bottom of num, treats them as a signed 8-bit value, then promotes them to a native signed int. The result is that you can now not only tell whether the 8 bits were a negative number (res will be negative if they were), you also get the 8-bit signed number as a native int in the process.
On the other hand, if all you care about is whether the 8th bit is set in the input int, denoting a negative value state, then why not just:
int IsEightBitNegative(int val)
{
return (val & 0x80) != 0;
}