Checking a pattern of bits in a sequence - C++

So basically I need to check whether a certain sequence of bits occurs in another sequence of bits (32 bits).
The function should take 3 arguments:
n, the number of rightmost bits of a value
the value itself
the sequence in which those n bits should be checked for occurrence
The function has to return the number of the bit where the desired sequence starts. Example: check whether the last 3 bits of 0x5 occur in 0xe1f4.
void bitcheck(unsigned int source, int operand, int n)
{
    int i, lastbits, mask;
    mask = (1 << n) - 1;
    lastbits = operand & mask;
    for(i = 0; i < 32; i++)
    {
        if((source & (lastbits << i)) == (lastbits << i))
            printf("It start at bit number %i\n", i + n);
    }
}

Your loop goes too far, I'm afraid. It could, for example 'find' the bit pattern '0001' in a value ~0, which consists of ones only.
This will do better (I hope):
void checkbit(unsigned value, unsigned pattern, unsigned n)
{
    unsigned size = 8 * sizeof value;
    if( 0 < n && n <= size)
    {
        unsigned mask = ~0U >> (size - n);
        pattern &= mask;
        for(unsigned i = 0; i <= size - n; i++, value >>= 1)
            if((value & mask) == pattern)
                printf("pattern found at bit position %u\n", i + n);
    }
}

I take you to mean that you want to take source as a bit array, and to search it for a bit sequence specified by the n lowest-order bits of operand. It seems you would want to perform a standard mask & compare; the only (minor) complication being that you need to scan. You seem already to have that idea.
I'd write it like this:
void bitcheck(uint32_t source, uint32_t operand, unsigned int n) {
    uint32_t mask = ~(~(uint32_t)0 << n);
    uint32_t needle = operand & mask;
    int i;
    for(i = 0; i <= (int)(32 - n); i += 1) {
        if (((source >> i) & mask) == needle) {
            /* found it */
            break;
        }
    }
}
There are some differences in the details between mine and yours, but the main functional difference is the loop bound: you must be careful to ignore cases where some of the bits you compare against the target were introduced by a shift operation, as opposed to originating in source, lest you get false positives. The way I've written the comparison makes it clearer (to me) what the bound should be.
I also use the explicit-width integer data types from stdint.h for all values where the code depends on a specific width. This is an excellent habit to acquire if you want to write code that ports cleanly.
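Using the question's example (the last 3 bits of 0x5, searched for in 0xe1f4), a quick hypothetical driver for this kind of scan might look like the sketch below. It returns the zero-based offset instead of printing, and assumes 0 < n < 32; the function name is made up for illustration.
#include <cstdint>
#include <cstdio>
/* Sketch: return the zero-based offset of the first match, or -1 if not found. */
int find_bits(uint32_t source, uint32_t operand, unsigned int n) {
    uint32_t mask = ~(~(uint32_t)0 << n);   /* n low bits set; assumes 0 < n < 32 */
    uint32_t needle = operand & mask;
    for (int i = 0; i <= (int)(32 - n); i++) {
        if (((source >> i) & mask) == needle)
            return i;
    }
    return -1;
}
int main(void) {
    /* 0x5 is binary 101; 0xe1f4 is binary 1110 0001 1111 0100. */
    printf("%d\n", find_bits(0xe1f4, 0x5, 3));   /* prints 2 */
}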

Perhaps:
if((source & (mask << i)) == (lastbits << i))
Because:
finding 10 in 11 will be true with your old code. In fact, your original condition will always return true when 'source' is made of all ones.

Related

How to convert large number strings into integers in C++?

Suppose I have a long number given as a string in C++, and we have to do numeric operations on it. We need to convert it into an integer, or find some other way to do the operations. What are the options?
string s="12131313123123213213123213213211312321321321312321213123213213";
Looks like the numbers you want to handle are way too big for any standard integer type, so just "converting" it won't get you very far. You have two options:
(Highly recommended!) Use a big integer library like e.g. gmp. Such libraries typically also provide functions for parsing and formatting the big numbers.
Implement your big numbers yourself. You could e.g. use an array of uintmax_t to store them. You will have to implement all the arithmetic you could possibly need yourself, and this isn't exactly an easy task. For parsing the number, you can use a reversed double dabble implementation. As an example, here's some code I wrote a while ago in C. You can probably use it as-is, but you need to provide some helper functions, and you might want to rewrite it using C++ facilities like std::string, replacing the struct used here with a std::vector -- it's just here to document the concept:
typedef struct hugeint
{
    size_t s;      // number of used elements in array e
    size_t n;      // number of total elements in array e
    uintmax_t e[];
} hugeint;

hugeint *hugeint_parse(const char *str)
{
    char *buf;
    // allocate and initialize:
    hugeint *result = hugeint_create();
    // this is just a helper function copying all numeric characters
    // to a freshly allocated buffer:
    size_t bcdsize = copyNum(&buf, str);
    if (!bcdsize) return result;
    size_t scanstart = 0;
    size_t n = 0;
    size_t i;
    uintmax_t mask = 1;
    for (i = 0; i < bcdsize; ++i) buf[i] -= '0';
    while (scanstart < bcdsize)
    {
        if (buf[bcdsize - 1] & 1) result->e[n] |= mask;
        mask <<= 1;
        if (!mask)
        {
            mask = 1;
            // this function increases the storage size of the flexible array member:
            if (++n == result->n) result = hugeint_scale(result, result->n + 1);
        }
        for (i = bcdsize - 1; i > scanstart; --i)
        {
            buf[i] >>= 1;
            if (buf[i-1] & 1) buf[i] |= 8;
        }
        buf[scanstart] >>= 1;
        while (scanstart < bcdsize && !buf[scanstart]) ++scanstart;
        for (i = scanstart; i < bcdsize; ++i)
        {
            if (buf[i] > 7) buf[i] -= 3;
        }
    }
    free(buf);
    return result;
}
Your best bet would be to use a large-number computation library.
One of the best out there is the GNU Multiple Precision Arithmetic Library.
Example of a useful function to solve your problem:
Function: int mpz_set_str (mpz_t rop, const char *str, int base)
Set the value of rop from str, a null-terminated C string in base
base. White space is allowed in the string, and is simply ignored.
The base may vary from 2 to 62, or if base is 0, then the leading
characters are used: 0x and 0X for hexadecimal, 0b and 0B for binary,
0 for octal, or decimal otherwise.
For bases up to 36, case is ignored; upper-case and lower-case letters
have the same value. For bases 37 to 62, upper-case letter represent
the usual 10..35 while lower-case letter represent 36..61.
This function returns 0 if the entire string is a valid number in base
base. Otherwise it returns -1.
Documentation: https://gmplib.org/manual/Assigning-Integers.html#Assigning-Integers
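For illustration, here is a minimal sketch of parsing the question's string with GMP and doing one operation on it (assumes GMP is installed and the program is linked with -lgmp):
#include <gmp.h>
#include <cstdio>
int main() {
    mpz_t n;
    mpz_init(n);
    // Parse the decimal string; mpz_set_str returns 0 on success.
    if (mpz_set_str(n, "12131313123123213213123213213211312321321321312321213123213213", 10) == 0) {
        mpz_add_ui(n, n, 1);      // example arithmetic: n = n + 1
        gmp_printf("%Zd\n", n);   // print the big integer
    }
    mpz_clear(n);
    return 0;
}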
If the string contains a number less than std::numeric_limits<uint64_t>::max(), then std::stoull() is the best option:
unsigned long long v = std::stoull(s);
C++11 and later.

Bits aren't being reset?

I am using Bit Scan Forward to detect set bits within a uint64_t, use each set bit index within my program, clear the set bit and then proceed to find the next set bit. However, when the initial uint64_t value is:
0000000000001000000000000000000000000000000000000000000000000000
The below code isn't resetting the 52nd bit, therefore it gets stuck in the while loop:
uint64_t bits64 = data;
// Detects index being 52
int32_t index = __builtin_ffsll(bits64);
while (0 != index) {
    // My logic
    // Set bit isn't being cleared here
    clearNthBitOf64(bits64, index);
    // Still picks-up bit 52 as being set
    index = __builtin_ffsll(bits64);
}

void clearNthBitOf64(uint64_t& input, const uint32_t n) {
    input &= ~(1 << n);
}
From the docs:
— Built-in Function: int __builtin_ffs (int x)
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.
You're simply off by one on your clear function, it should be:
clearNthBitOf64(bits64, index-1);
Also your clear function is overflowing. You need to ensure that what you're left shifting is of sufficient size:
void clearNthBitOf64(uint64_t& input, const uint32_t n) {
    input &= ~(1ULL << n);
    //         ^^^^
}
__builtin_ffsll "returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero." You need to adjust the left shift to ~(1ULL << (n - 1)) or change the function call to clearNthBitOf64(bits64, index - 1);

Shortest way to calculate difference between two numbers?

I'm about to do this in C++, but I have had to do it in several languages; it's a fairly common and simple problem, and this is the last time. I've had enough of coding it the way I do, and I'm sure there must be a better method, so I'm posting here before I write out the same long-winded method in yet another language.
Consider the (lilies!) following code:
// I want the difference between these two values as a positive integer
int x = 7;
int y = 3;
int diff;
// This means you have to find the largest number first
// before making the subtraction, to keep the answer positive
if (x > y) {
    diff = (x - y);
} else if (y > x) {
    diff = (y - x);
} else if (x == y) {
    diff = 0;
}
This may sound petty but that seems like a lot to me, just to get the difference between two numbers. Is this in fact a completely reasonable way of doing things and I'm being unnecessarily pedantic, or is my spidey sense tingling with good reason?
Just get the absolute value of the difference:
#include <cstdlib>
int diff = std::abs(x-y);
Using the std::abs() function is one clear way to do this, as others here have suggested.
But perhaps you are interested in succinctly writing this function without library calls.
In that case
diff = x > y ? x - y : y - x;
is a short way.
In your comments, you suggested that you are interested in speed. In that case, you may be interested in ways of performing this operation that do not require branching. This link describes some.
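For example, one well-known branch-free formulation is sketched below. It assumes the usual arithmetic right shift on signed integers and that x - y does not overflow (see the answer below about extreme inputs):
#include <climits>
int abs_diff(int x, int y) {
    int v = x - y;                                  // may overflow for extreme inputs
    int mask = v >> (sizeof(int) * CHAR_BIT - 1);   // all ones if v is negative, else zero
    return (v + mask) ^ mask;                       // negates v when mask is -1
}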
#include <cstdlib>
int main()
{
    int x = 7;
    int y = 3;
    int diff = std::abs(x - y);
}
All the existing answers will overflow on extreme inputs, giving undefined behaviour. #craq pointed this out in a comment.
If you know that your values will fall within a narrow range, it may be fine to do as the other answers suggest, but to handle extreme inputs (i.e. to robustly handle any possible input values), you cannot simply subtract the values and then apply the std::abs function. As craq rightly pointed out, the subtraction may overflow, causing undefined behaviour (consider INT_MIN - 1), and the std::abs call may also cause undefined behaviour (consider std::abs(INT_MIN)). It's no better to determine the min and max of the pair and then perform the subtraction.
More generally, a signed int is unable to represent the maximum difference between two signed int values. The unsigned int type should be used for the output value.
I see 3 solutions. I've used the explicitly-sized integer types from stdint.h here, to close the door on uncertainties like whether long and int are the same size and range.
Solution 1. The low-level way.
// I'm unsure if it matters whether our target platform uses 2's complement,
// due to the way signed-to-unsigned conversions are defined in C and C++:
// > the value is converted by repeatedly adding or subtracting
// > one more than the maximum value that can be represented
// > in the new type until the value is in the range of the new type
uint32_t difference_int32(int32_t i, int32_t j) {
    static_assert(
        (-(int64_t)INT32_MIN) == (int64_t)INT32_MAX + 1,
        "Unexpected numerical limits. This code assumes two's complement."
    );
    // Map the signed values across to the number-line of uint32_t.
    // Preserves the greater-than relation, such that an input of INT32_MIN
    // is mapped to 0, and an input of 0 is mapped to near the middle
    // of the uint32_t number-line.
    // Leverages the wrap-around behaviour of unsigned integer types.
    // It would be more intuitive to set the offset to (uint32_t)(-1 * INT32_MIN)
    // but that multiplication overflows the signed integer type,
    // causing undefined behaviour. We get the right effect subtracting from zero.
    const uint32_t offset = (uint32_t)0 - (uint32_t)(INT32_MIN);
    const uint32_t i_u = (uint32_t)i + offset;
    const uint32_t j_u = (uint32_t)j + offset;
    const uint32_t ret = (i_u > j_u) ? (i_u - j_u) : (j_u - i_u);
    return ret;
}
I tried a variation on this using bit-twiddling cleverness taken from https://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax but modern code-generators seem to generate worse code with this variation. (I've removed the static_assert and the comments.)
uint32_t difference_int32(int32_t i, int32_t j) {
    const uint32_t offset = (uint32_t)0 - (uint32_t)(INT32_MIN);
    const uint32_t i_u = (uint32_t)i + offset;
    const uint32_t j_u = (uint32_t)j + offset;
    // Surprisingly it helps code-gen in MSVC 2019 to manually factor-out
    // the common subexpression. (Even with optimisation /O2)
    const uint32_t t = (i_u ^ j_u) & -(i_u < j_u);
    const uint32_t min = j_u ^ t; // min(i_u, j_u)
    const uint32_t max = i_u ^ t; // max(i_u, j_u)
    const uint32_t ret = max - min;
    return ret;
}
Solution 2. The easy way. Avoid overflow by doing the work using a wider signed integer type. This approach can't be used if the input signed integer type is the largest signed integer type available.
uint32_t difference_int32(int32_t i, int32_t j) {
    return (uint32_t)std::abs((int64_t)i - (int64_t)j);
}
Solution 3. The laborious way. Use flow-control to work through the different cases. Likely to be less efficient.
uint32_t difference_int32(int32_t i, int32_t j)
{   // This static assert should pass even on 1's complement.
    // It's just about impossible that int32_t could ever be capable of representing
    // *more* values than can uint32_t.
    // Recall that in 2's complement it's the same number, but in 1's complement,
    // uint32_t can represent one more value than can int32_t.
    static_assert( // Must use int64_t to subtract negative number from INT32_MAX
        ((int64_t)INT32_MAX - (int64_t)INT32_MIN) <= (int64_t)UINT32_MAX,
        "Unexpected numerical limits. Unable to represent greatest possible difference."
    );
    uint32_t ret;
    if (i == j) {
        ret = 0;
    } else {
        if (j > i) { // Swap them so that i > j
            const int32_t i_orig = i;
            i = j;
            j = i_orig;
        } // We may now safely assume i > j
        uint32_t magnitude_of_greater; // The magnitude, i.e. abs()
        bool greater_is_negative; // Zero is of course non-negative
        uint32_t magnitude_of_lesser;
        bool lesser_is_negative;
        if (i >= 0) {
            magnitude_of_greater = i;
            greater_is_negative = false;
        } else { // Here we know 'lesser' is also negative, but we'll keep it simple
            // magnitude_of_greater = -i; // DANGEROUS, overflows if i == INT32_MIN.
            magnitude_of_greater = (uint32_t)0 - (uint32_t)i;
            greater_is_negative = true;
        }
        if (j >= 0) {
            magnitude_of_lesser = j;
            lesser_is_negative = false;
        } else {
            // magnitude_of_lesser = -j; // DANGEROUS, overflows if j == INT32_MIN.
            magnitude_of_lesser = (uint32_t)0 - (uint32_t)j;
            lesser_is_negative = true;
        }
        // Finally compute the difference between lesser and greater
        if (!greater_is_negative && !lesser_is_negative) {
            ret = magnitude_of_greater - magnitude_of_lesser;
        } else if (greater_is_negative && lesser_is_negative) {
            ret = magnitude_of_lesser - magnitude_of_greater;
        } else { // One negative, one non-negative. Difference is sum of the magnitudes.
            // This will never overflow.
            ret = magnitude_of_lesser + magnitude_of_greater;
        }
    }
    return ret;
}
Well, it depends on what you mean by shortest: the fastest runtime, the fastest compilation, the fewest lines, or the least memory. I'll assume you mean runtime.
#include <algorithm> // std::max/min
int diff = std::max(x,y)-std::min(x,y);
This does two comparisons and one subtraction (the subtraction is unavoidable, but it could be optimized through certain bitwise operations in specific cases; the compiler might actually do that for you). Also, if the compiler is smart enough, it could do only one comparison and reuse the result for the other. E.g. if x > y, then you know from the first comparison that y < x, but I'm not sure whether compilers take advantage of this.

How to convert large integers to base 2^32?

First off, I'm doing this for myself so please don't suggest "use GMP / xint / bignum" (if it even applies).
I'm looking for a way to convert large integers (say, OVER 9000 digits) into an int32 array representing the number in base 2^32. The numbers will start out as base 10 strings.
For example, if I wanted to convert the string a = "4294967300" (in base 10), which is just over UINT_MAX, to the new base 2^32 array, it would be int32_t b[] = {1,4}. If int32_t b[] = {3,2485738}, the base 10 number would be 3 * 2^32 + 2485738. Obviously the numbers I'll be working with are beyond the range of even int64, so I can't exactly turn the string into an integer and mod my way to success.
I have a function that does subtraction in base 10. Right now I'm thinking I'll just do subtraction(char* number, "2^32") and count how many times I can subtract before I get a negative number, but that will probably take a long time for larger numbers.
Can someone suggest a different method of conversion? Thanks.
EDIT
Sorry in case you didn't see the tag, I'm working in C++
Assuming your bignum class already has multiplication and addition, it's fairly simple:
bignum str_to_big(char* str) {
    bignum result(0);
    while (*str) {
        result *= 10;
        result += (*str - '0');
        str = str + 1;
    }
    return result;
}
Converting the other way is the same concept, but requires division and modulo
std::string big_to_str(bignum num) {
    std::string result;
    do {
        result.push_back('0' + (num % 10));   // note the '0' offset to get the digit character
        num /= 10;
    } while (num > 0);
    std::reverse(result.begin(), result.end());
    return result;
}
Both of these are for unsigned only.
To convert from a base 10 string to your numbering system, start with zero; for each base 10 digit, multiply your accumulated value by 10 and then add the digit. Every time you have a carry, add a new digit to your base 2^32 array.
The simplest (not the most efficient) way to do this is to write two functions, one to multiply a large number by an int, and one to add an int to a large number. If you ignore the complexities introduced by signed numbers, the code looks something like this:
(EDITED to use vector for clarity and to add code for actual question)
#include <cstdint>
#include <vector>
using std::vector;

void mulbig(vector<uint32_t> &bignum, uint16_t multiplicand)
{
    uint32_t carry = 0;
    for( unsigned i = 0; i < bignum.size(); i++ ) {
        uint64_t r = ((uint64_t)bignum[i] * multiplicand) + carry;
        bignum[i] = (uint32_t)(r & 0xffffffff);
        carry = (uint32_t)(r >> 32);
    }
    if( carry )
        bignum.push_back(carry);
}

void addbig(vector<uint32_t> &bignum, uint16_t addend)
{
    uint32_t carry = addend;
    for( unsigned i = 0; carry && i < bignum.size(); i++ ) {
        uint64_t r = (uint64_t)bignum[i] + carry;
        bignum[i] = (uint32_t)(r & 0xffffffff);
        carry = (uint32_t)(r >> 32);
    }
    if( carry )
        bignum.push_back(carry);
}
Then, implementing atobignum() using those functions is trivial:
void atobignum(const char *str, vector<uint32_t> &bignum)
{
    bignum.clear();
    bignum.push_back(0);
    while( *str ) {
        mulbig(bignum, 10);
        addbig(bignum, *str - '0');
        ++str;
    }
}
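A quick hypothetical check of atobignum(), using the "4294967300" example from the question (assumes the functions above are in scope):
#include <cstdio>
int main() {
    vector<uint32_t> big;
    atobignum("4294967300", big);
    // Limbs are stored least-significant first, so this prints
    // "limb 0: 4" and "limb 1: 1", i.e. 1 * 2^32 + 4 == 4294967300.
    for (size_t i = 0; i < big.size(); ++i)
        printf("limb %zu: %u\n", i, (unsigned)big[i]);
    return 0;
}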
I think Docjar: gnu/java/math/MPN.java might contain what you're looking for, specifically the code for public static int set_str (int dest[], byte[] str, int str_len, int base).
Start by converting the number to binary. Starting from the right, each group of 32 bits is a single base 2^32 digit.

Nibble shifting

I was working on an encryption algorithm, and I wonder how I can change the following code into something simpler and how to reverse it.
typedef struct
{
    unsigned low  : 4;
    unsigned high : 4;
} nibles;

static void crypt_enc(char *data, int size)
{
    char last = 0;
    //...
    // Pass 2
    for (i = 0; i < size; i++)
    {
        nibles *n = (nibles *)&data[i];
        n->low = last;
        last = n->high;
        n->high = n->low;
    }
    ((nibles *)&data[0])->low = last;
}
data is the input and the output for this code.
You are setting both nibbles of every byte to the same thing, because you set the high nibble to the same as the low nibble in the end. I'll assume this is a bug and that your intention was to shift all the nibbles in the data, carrying from one byte to the other, and rolling around. Id est, ABCDEF (nibbles order from low to high) would become FABCDE. Please correct me if I got that wrong.
The code should be something like:
static void crypt_enc(char *data, int size)
{
    char last = 0;
    //...
    // Pass 2
    for (i = 0; i < size; i++)
    {
        nibles *n = (nibles *)&data[i];
        unsigned char old_low = n->low;
        n->low = last;
        last = n->high;
        n->high = old_low;
    }
    ((nibles *)&data[0])->low = last;
}
Is everything okay now? No. The cast to nibbles* is only well-defined if the alignment of nibbles is not stricter than the alignment of char. And that is not guaranteed (however, with a small change, GCC generates a type with the same alignment).
Personally, I'd avoid this issue altogether. Here's how I'd do it:
void set_low_nibble(char& c, unsigned char nibble) {
    // assumes nibble has no bits set in the four higher bits
    unsigned char& b = reinterpret_cast<unsigned char&>(c);
    b = (b & 0xF0) | nibble;
}

void set_high_nibble(char& c, unsigned char nibble) {
    unsigned char& b = reinterpret_cast<unsigned char&>(c);
    b = (b & 0x0F) | (nibble << 4);
}

unsigned char get_low_nibble(unsigned char c) {
    return c & 0x0F;
}

unsigned char get_high_nibble(unsigned char c) {
    return (c & 0xF0) >> 4;
}

static void crypt_enc(char *data, int size)
{
    char last = 0;
    //...
    // Pass 2
    for (i = 0; i < size; ++i)
    {
        unsigned char old_low = get_low_nibble(data[i]);
        set_low_nibble(data[i], last);
        last = get_high_nibble(data[i]);
        set_high_nibble(data[i], old_low);
    }
    set_low_nibble(data[0], last);
}
Doing the reverse amounts to changing "low" to "high" and vice-versa; rolling to the last nibble, not the first; and going through the data in the opposite direction:
for (i = size-1; i >= 0; --i)
{
    unsigned char old_high = get_high_nibble(data[i]);
    set_high_nibble(data[i], last);
    last = get_low_nibble(data[i]);
    set_low_nibble(data[i], old_high);
}
set_high_nibble(data[size-1], last);
If you want you can get rid of all the transfers to the temporary last. You just need to save the last nibble of all, and then shift the nibbles directly without the use of another variable:
last = get_high_nibble(data[size-1]);
for (i = size-1; i > 0; --i) // the last one needs special care
{
    set_high_nibble(data[i], get_low_nibble(data[i]));
    set_low_nibble(data[i], get_high_nibble(data[i-1]));
}
set_high_nibble(data[0], get_low_nibble(data[0]));
set_low_nibble(data[0], last);
It looks like you're just shifting each nibble one place and then taking the low nibble of the last byte and moving it to the beginning. Just do the reverse to decrypt (start at the end of data, move to the beginning)
As you are using bit fields, it is very unlikely that there will be a shift style method to move nibbles around. If this shifting is important to you, then I recommend you consider storing them in an unsigned integer of some sort. In that form, bit operations can be performed effectively.
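For instance, if the whole buffer happens to fit in a single unsigned integer, the nibble rotation collapses to a plain bit rotation. A sketch, assuming all 64 bits carry data:
#include <cstdint>
// Rotate all sixteen nibbles of a 64-bit value left by one nibble position.
uint64_t rotate_nibbles_left(uint64_t v) {
    return (v << 4) | (v >> 60);
}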
Kevin's answer is right about what you are attempting to do. However, you've made an elementary mistake: the end result is that the original low nibbles are lost and both nibbles of most bytes end up identical, instead of the nibbles being rotated.
To see why that is the case, I'd suggest you first implement a byte rotation ({a, b, c} -> {c, a, b}) the same way - which is by using a loop counter increasing from 0 to array size. See if you can do better by reducing transfers into the variable last.
Once you see how you can do that, you can simply apply the same logic to nibbles ({al:ah, bl:bh, cl:ch} -> {ch:al, ah:bl, bh:cl}). My representation here is incorrect if you think in terms of hex values. The hex value 0xXY is Y:X in my notation. If you think about how you've done the byte rotation, you can figure out how to save only one nibble, and simply transfer nibbles without actually moving them into last.
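For reference, the byte-rotation exercise described above might look like the sketch below: an increasing loop counter and a single saved byte, mirroring the structure of the original loop.
#include <cstddef>
// {a, b, c} -> {c, a, b}: every byte receives its predecessor,
// and the saved last byte wraps around to the front.
void rotate_bytes(unsigned char *data, int size) {
    unsigned char last = 0;
    for (int i = 0; i < size; ++i) {
        unsigned char old = data[i];
        data[i] = last;     // data[0] temporarily gets the placeholder 0
        last = old;
    }
    data[0] = last;         // the original last byte wraps to the front
}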
Reversing the code is impossible as the algorithm nukes the first byte entirely and discards the lower half of the rest.
On the first iteration of the for loop, the lower part of the first byte is set to zero.
n->low = last;
It's never saved off anywhere. It's simply gone.
// I think this is what you were trying for
last = ((nibles *)&data[0])->low;
for (i = 0; i < size-1; i++)
{
    nibles *n = (nibles *)&data[i];
    nibles *next = (nibles *)&data[i+1];
    n->low = n->high;
    n->high = next->low;
}
((nibles *)&data[size-1])->low = ((nibles *)&data[size-1])->high;
((nibles *)&data[size-1])->high = last;
To reverse it:
last = ((nibles *)&data[size-1])->high;
for (i = size-1; i > 0; i--)
{
    nibles *n = (nibles *)&data[i];
    nibles *prev = (nibles *)&data[i-1];
    n->high = n->low;
    n->low = prev->high;
}
((nibles *)&data[0])->high = ((nibles *)&data[0])->low;
((nibles *)&data[0])->low = last;
... unless I got high and low backwards.
But anyway, this is NOWHERE near the field of encryption. This is obfuscation at best. Security through obscurity is a terrible, terrible practice, and home-brew encryption gets people in trouble. If you're playing around, all the more power to you. But if you actually want something to be secure, please, for the love of all your bytes, use a well-known and secure encryption scheme.