Here we have a function fire() which accepts two arguments:
A capital letter (char) in the range of 'A' .. 'A'+BS_GRID_ROWS-1 that indicates the row in your grid to attack.
An integer (int) in the range of 1 .. BS_GRID_COLS that indicates the column of your grid to attack.
The return code will be:
0 if there is only open water.
The bit BS_SHIP_HIT will be set, or both BS_SHIP_HIT and BS_SHIP_SANK will be set. In addition, the ship that was hit will be indicated in the lowest four bits of the return code. You may use BS_SHIP_MASK to help extract the number for the ship type.
semi-pseudocode interpretation:
//r is A ... (A + BS_GRID_ROWS - 1)
//c is 1 ... BS_GRID_COLS
fire(char r, int c) {
//some set of commands
if(miss) {
return 0;
} else if(sink) {
return hit + sunk + size;
} else if(hit) {
return hit + size;
}
}
I am uncertain exactly how I might go about extracting these individual values (hit, sunk, size) from the return value.
The actual .h file and its relevant const values are shown here:
#ifndef BATTLESHIP
#define BATTLESHIP
const int BS_SHIP_HIT = 0x10; // Ship is hit, or
const int BS_SHIP_SANK = 0x20; // sank (must also | BS_SHIP_HIT)
const int BS_CARRIER = 1;
const int BS_BATTLESHIP= 2;
const int BS_CRUISER = 3;
const int BS_DESTROYER = 4;
const int BS_SUBMARINE = 5;
const int BS_SHIP_COUNT = 5;
const int BS_SHIP_MASK = 0x0F;
const int BS_CARRIER_SIZE = 5;
const int BS_BATTLESHIP_SIZE= 4;
const int BS_CRUISER_SIZE = 3;
const int BS_DESTROYER_SIZE = 2;
const int BS_SUBMARINE_SIZE = 3;
const int BS_MODE_NEW_GAME = 1;
const int BS_MODE_CONTINUE_GAME = 2;
const int BS_GRID_ROWS = 10; // letters A to J
const int BS_GRID_COLS = 10; // numbers 1 to 10
const int MaxPlayerCount = 65; // Maximum size for following arrays
extern int userIncoming(char, int);
extern int userBattle(int, int);
extern int incomingStub(char, int);
extern int battleStub(int, int);
extern int (*fire[])(char, int);
extern int (*battleship[])(int, int);
extern char const *playerName[];
#endif
Something like this perhaps?
int result = fire(r, c);
if (result & BS_SHIP_HIT)
{
std::cout << "Ship of size " << (result & BS_SHIP_MASK) << " hit\n";
}
If the BS_SHIP_HIT bit is set in result, then the result of result & BS_SHIP_HIT will be equal to BS_SHIP_HIT; otherwise it will be zero (which is equivalent to false). Note the parentheses around result & BS_SHIP_MASK in the print statement: << binds more tightly than &, so they are required.
The result of result & BS_SHIP_MASK will be the low four bits in result.
Or let's look at it using the actual bits:
BS_SHIP_HIT is equal to the binary value 00010000 and BS_SHIP_MASK equals 00001111. Let's assume that fire returns 00010101 (BS_SHIP_HIT set and size 5); then the if condition will be
00010000
& 00010101
----------
= 00010000
Then for the printing, the expression will be
00010101
& 00001111
----------
= 00000101
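Putting it together, here is a small sketch (assuming the header above; the variable names are just placeholders) that extracts all three pieces of information from one return value:
int result = fire(r, c);
if (result == 0)
{
    std::cout << "Miss\n";
}
else if (result & BS_SHIP_HIT)
{
    int ship = result & BS_SHIP_MASK;          // 1..5, e.g. BS_SUBMARINE
    bool sank = (result & BS_SHIP_SANK) != 0;  // true only when the ship went down
    std::cout << "Hit ship " << ship << (sank ? " and sank it\n" : "\n");
}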
I'm trying to just copy the contents of a 32-bit unsigned int to be used as a float. Not casting it, just reinterpreting the integer bits as a float. I'm aware memcpy is the most-suggested option for this. However, when I memcpy from uint32_t to float and print out the individual bits, I see they are quite different.
Here is my code snippet:
#include <iostream>
#include <stdint.h>
#include <cstring>
using namespace std;
void print_bits(unsigned n) {
unsigned i;
for(i=1u<<31;i > 0; i/=2)
(n & i) ? printf("1"): printf("0");
}
union {
uint32_t u_int;
float u_float;
} my_union;
int main()
{
uint32_t my_int = 0xc6f05705;
float my_float;
//Method 1 using memcpy
memcpy(&my_float, &my_int, sizeof(my_float));
//Print using function
print_bits(my_int);
printf("\n");
print_bits(my_float);
//Print using printf
printf("\n%0x\n",my_int);
printf("%0x\n",my_float);
//Method 2 using unions
my_union.u_int = 0xc6f05705;
printf("union int = %0x\n",my_union.u_int);
printf("union float = %0x\n",my_union.u_float);
return 0;
}
Outputs:
11000110111100000101011100000101
11111111111111111000011111010101
c6f05705
400865
union int = c6f05705
union float = 40087b
Can someone explain what's happening? I expected the bits to match. Didn't work with a union either.
You need to change the function print_bits to
#include <climits> // for CHAR_BIT

inline
int is_big_endian(void)
{
const union
{
uint32_t i;
char c[sizeof(uint32_t)];
} e = { 0x01000000 };
return e.c[0];
}
void print_bits( const void *src, unsigned int size )
{
//Check for the order of bytes in memory of the compiler:
int t, c;
if (is_big_endian())
{
t = 0;
c = 1;
}
else
{
t = size - 1;
c = -1;
}
for (; t >= 0 && t <= size - 1; t += c)
{ //print the bits of each byte from the MSB to the LSB
unsigned char i;
unsigned char n = ((unsigned char*)src)[t];
for(i = 1 << (CHAR_BIT - 1); i > 0; i /= 2)
{
printf("%d", (n & i) != 0);
}
}
printf("\n");
}
and call it like this:
int a = 7;
print_bits(&a, sizeof(a));
that way there won't be any implicit conversion when you call print_bits (your original version converted the float's value to unsigned, which is why the bits didn't match), and it will work for an object of any size.
EDIT: I replaced 7 with CHAR_BIT - 1 because the size of a byte can be different from 8 bits.
EDIT 2: I added support for both little-endian and big-endian compilers.
Also, as @M.M suggested in the comments, you can use a template so the call becomes print_bits(a) instead of print_bits(&a, sizeof(a))
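A minimal sketch of that template wrapper (my own addition, not part of the original answer) could look like this; it simply forwards to the two-argument version above:
template <typename T>
void print_bits(const T &a)
{
    print_bits(&a, sizeof(a)); // deduces the size automatically
}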
I have written a program that sets up a client/server TCP socket over which the user sends an integer value to the server through the use of a terminal interface. On the server side I am executing byte commands for which I need hex values stored in my array.
sprintf(mychararray, "%X", myintvalue);
This code takes my integer and prints it as a hex value into a char array. The only problem is that when I use that array to set my commands, the values register as ASCII characters. So for example, if I send an integer equal to 3000, it is converted to 0x0BB8 and then stored as 'B' 'B' '8', which corresponds to 42 42 38 in hex. I have looked all over the place for a solution and have not been able to come up with one.
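A tiny sketch (my own, not from the original post) that reproduces the behaviour described above:
#include <cstdio>

int main()
{
    char buf[16];
    std::sprintf(buf, "%X", 3000u);              // buf now holds the text "BB8"
    for (const char *p = buf; *p != '\0'; ++p)
        std::printf("%02X ", (unsigned char)*p); // prints 42 42 38 - the ASCII codes
    return 0;
}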
I finally came up with a solution to my problem. First I created an array and stored all byte values from 0x00 to 0xFF in it.
char m_list[256]; //array defined in class
m_list[0] = 0x00; //set first array index to zero
int count = 1; //count variable to step through the array and set members
while (count < 256)
{
m_list[count] = m_list[count -1] + 0x01; //populate array with hex from 0x00 - 0xFF
count++;
}
Next I created a function that lets me group my hex values into individual bytes and store into the array that will be processing my command.
void parse_input(char hex_array[], int i, char ans_array[])
{
int n = 0;
int j = 0;
int idx = 0;
string hex_values;
while (n < i-1)
{
if (hex_array[n] == '\0')
{
hex_values = '0';
}
else
{
hex_values = hex_array[n];
}
if (hex_array[n+1] == '\0')
{
hex_values += '0';
}
else
{
hex_values += hex_array[n+1];
}
cout<<"This is the string being used in stoul: "<<hex_values<<"\n"; //statement for testing
idx = stoul(hex_values, nullptr, 16);
ans_array[j] = m_list[idx];
n = n + 2;
j++;
}
}
This function is called right after my previous code:
sprintf(mychararray, "%X", myintvalue);
parse_input(arrayA, sizeof(arrayA), arrayB);
Example: arrayA is an 8-byte char array and arrayB is a 4-byte char array. arrayA should be double the size of arrayB, since you are taking two ASCII characters and making one byte, e.g. 'A' 'B' = 0xAB.
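For comparison, a minimal alternative sketch (my own, not the poster's approach) that packs the integer's bytes directly with shifts and masks, skipping the hex-string round trip entirely:
#include <cstdint>
#include <cstdio>

void int_to_bytes(uint32_t value, unsigned char out[4])
{
    out[0] = (value >> 24) & 0xFF; // most significant byte first
    out[1] = (value >> 16) & 0xFF;
    out[2] = (value >> 8) & 0xFF;
    out[3] = value & 0xFF;
}

int main()
{
    unsigned char buf[4];
    int_to_bytes(3000, buf);         // 3000 == 0x00000BB8
    for (unsigned char b : buf)
        std::printf("%02X ", b);     // prints: 00 00 0B B8
    return 0;
}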
While I was trying to understand your question, I realized that what you needed was more than a single variable. You needed a class, because you want both a string representing the hex code to be printed out and the number itself as an unsigned 16-bit integer, which I took to be unsigned short int. So I created a class named hexset that does all this for you (I got the idea from bitset):
#include <iostream>
#include <string>
class hexset {
public:
hexset(int num) {
this->hexnum = (unsigned short int) num;
this->hexstring = hexset::to_string(num);
}
unsigned short int get_hexnum() {return this->hexnum;}
std::string get_hexstring() {return this->hexstring;}
private:
static std::string to_string(int decimal) {
int length = int_length(decimal);
std::string ret = "";
for (int i = (length > 1 ? int_length(decimal) - 1 : length); i >= 0; i--) {
ret = hex_arr[decimal%16]+ret;
decimal /= 16;
}
if (ret[0] == '0') {
ret = ret.substr(1,ret.length()-1);
}
return "0x"+ret;
}
static int int_length(int num) {
int ret = 1;
while (num > 10) {
num/=10;
++ret;
}
return ret;
}
static constexpr char hex_arr[16] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
unsigned short int hexnum;
std::string hexstring;
};
constexpr char hexset::hex_arr[16];
int main() {
int number_from_file = 3000; // This number is in all forms technically, hex is just another way to represent this number.
hexset hex(number_from_file);
std::cout << hex.get_hexstring() << ' ' << hex.get_hexnum() << std::endl;
return 0;
}
I assume you'll probably want to do some operator overloading to make it so you can add and subtract from this number or assign new numbers or do any kind of mathematical or bit shift operation.
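As a rough sketch of that overloading (my own addition, assuming the hexset class is kept exactly as written above, with a non-const get_hexnum()):
// Hypothetical operator+: builds a new hexset from the stored number plus an int.
hexset operator+(hexset lhs, int rhs)
{
    return hexset(static_cast<int>(lhs.get_hexnum()) + rhs);
}

// Usage: hexset h = hexset(3000) + 5;  -> h.get_hexstring() == "0xBBD"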
I am trying to understand what the following classes do, in particular the function func. I looked up what each line roughly does. It's definitely manipulating bits by shifting them, but I can't understand the big picture.
template<class T>
class classX
{
public:
classX(int _z) : z(_z){}
size_t operator()(T x) const
{
union { T a; size_t b; } u;
u.b = 0;
u.a = x;
unsigned char rng1 = cvRNG(z*(z+1) + u.b);// cvRNG returns the input if number>0, else return (uint64_t)(int64_t)-1
return (size_t)( cvRandReal(&rng1)*(double)(UINT32_MAX) );// cvRandReal returns random floating-point number between 0 and 1
}
private:
int z;
};
template<class T,class H=classX<T>>
class classY
{
public:
classY(int nb, int nh)
: l_(0),c_(0),arr_(0)
{
b_ = nb;
l_ = nb / 8 + 1;
arr_ = new unsigned char[l_];
for(int i=1; i<=nh; i++)
ff.push_back( H(i) );
}
void func(const T& x)
{
for(size_t j=0; j<ff.size(); j++){
size_t key = ff[j](x) % b_;
arr_[ key / 8 ] |= (unsigned char)(1 << (key % 8));
}
c_++;
}
bool func2(const T& x) const
{
size_t z = 0;
for(size_t j=0; j<ff.size(); j++){
size_t key = ff[j](x) % b_;
z += (arr_[ key / 8 ] & (unsigned char)(1 << (key % 8)) ) > 0 ? 1 : 0;
}
return ( z == ff.size() );
}
private:
unsigned char* arr_;
int l_;
int c_;
size_t b_;
std::vector<H> ff;
};
This code builds a bitmap hash for a hash set (essentially a Bloom filter).
// Calc Hash for a class
template<class T>
class classX
{
public:
classX(int _z) : z(_z){} // construct hash
// Method returns a hashcode for x based on seed z.
size_t operator()(T x) const
{
// This is a trick to read the first sizeof(size_t) bytes of object x:
// the union members a and b share the same memory.
union { T a; size_t b; } u;
u.b = 0; // just clean the memory
u.a = x; // since u.a shares memory with u.b, this line initializes u.b with the first bytes of x.
// If x is an instance of a class with virtual methods, those bytes will be the vtable pointer (the same for all instances of the same class).
// If x is a plain struct, they will be the first bytes of x's data.
// Most likely x is meant to be a struct.
// rng1 is the base seed for the cvRandReal function. Note that u.b is not 0!
unsigned char rng1 = cvRNG(z*(z+1) + u.b);// cvRNG returns the input if number>0, else returns (uint64_t)(int64_t)-1
// With rng1 as the seed, the line below is just a hash function
return (size_t)( cvRandReal(&rng1)*(double)(UINT32_MAX) );// cvRandReal returns a random floating-point number between 0 and 1
}
private:
int z; // base seed
};
// Bitmap Hash for Hash Set with Objects T, H - hash functions
template<class T,class H=classX<T>>
class classY
{
public:
// nb: size of bitmap hash in bits
// nh: number of hash functions.
// Both of these numbers are meant to reduce the probability of hash collisions
classY(int nb, int nh)
: l_(0),c_(0),arr_(0)
{
b_ = nb; // size of bitmap hash in bits
l_ = nb / 8 + 1; // size of bitmap hash in bytes
arr_ = new unsigned char[l_]; // bitmap array - hash data
// init hash functions. Start from 1, because 0 is not a good seed.
for(int i=1; i<=nh; i++)
ff.push_back( H(i) );
}
// Add x into the hash bitmap (add x to the set)
void func(const T& x)
{
// for all hash functions
for(size_t j=0; j<ff.size(); j++)
{
size_t key = ff[j](x) % b_; // calc hash code and reduce it modulo the number of bits in the map
// key is a bit number in the bitmap
// Just set the key-th bit in the bitmap
// key / 8 - byte number
// key % 8 - bit number
arr_[ key / 8 ] |= (unsigned char)(1 << (key % 8));
}
c_++; // increase the number of objects that were processed to build the hash
}
// Check if X is in the set (i.e., if X was added with func before)
// It returns false if X was not added
// It returns true if X was probably added (high probability that X was added, but not 100%)
bool func2(const T& x) const
{
size_t z = 0; // number of passed hash tests
for(size_t j=0; j<ff.size(); j++){
size_t key = ff[j](x) % b_; // calc hash code and reduce it modulo the number of bits in the map, like in func()
// Increment z (number of passed hash tests) if the key-th bit is set in the bitmap
z += (arr_[ key / 8 ] & (unsigned char)(1 << (key % 8)) ) > 0 ? 1 : 0;
}
return ( z == ff.size() ); // return true only if all hash-function tests were passed.
}
private:
unsigned char* arr_; // hash bitmap
int l_;// size of bitmap in bytes
int c_;// number of objects that were processed to build the hash
size_t b_;// size of bitmap in bits
std::vector<H> ff; // hash functions
};
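To make the big picture concrete, here is a self-contained sketch (my own, not the original code) that mirrors classY's structure but swaps classX/cvRandReal for std::hash with different seeds; it shows the Bloom-filter idea of an insert that sets k bits and a membership test that checks all k bits:
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

class bloom_sketch {
public:
    bloom_sketch(std::size_t nbits, std::size_t nhashes)
        : bits_(nbits, false), nhashes_(nhashes) {}

    void insert(const std::string& x) {                   // plays the role of func()
        for (std::size_t j = 0; j < nhashes_; ++j)
            bits_[index(x, j)] = true;
    }

    bool probably_contains(const std::string& x) const {  // plays the role of func2()
        for (std::size_t j = 0; j < nhashes_; ++j)
            if (!bits_[index(x, j)]) return false;         // definitely not in the set
        return true;                                       // in the set with high probability
    }

private:
    std::size_t index(const std::string& x, std::size_t seed) const {
        // Mix a per-function seed into one base hash, then reduce modulo the bit count.
        return (std::hash<std::string>{}(x) ^ (seed * 0x9e3779b97f4a7c15ULL)) % bits_.size();
    }
    std::vector<bool> bits_;
    std::size_t nhashes_;
};

int main() {
    bloom_sketch f(1024, 4);
    f.insert("hello");
    std::cout << f.probably_contains("hello") << ' '       // 1
              << f.probably_contains("world") << '\n';     // almost certainly 0
}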
I am trying to code a word-RAM version of subset sum. (It is a basic DP algorithm, and the algorithm itself should not be important for finding the problem with the code.) This is the minimum code needed to reproduce the error, I think:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
// get bit #bitno from num. 0 is most significant.
unsigned int getbit(unsigned int num, int bitno){
unsigned int w = sizeof(int)*8; //for regular word.
int shiftno = w-bitno-1;
unsigned int mask = 1<<shiftno;
unsigned int maskedn = num&mask;
unsigned int thebit = maskedn>>shiftno;
return thebit;
}
/* No boundary array right shift */
unsigned int* nbars(unsigned int orig[], unsigned int x){
int alength = sizeof(orig)/sizeof(orig[0]);
unsigned int b_s = sizeof(int)*8;
unsigned int * shifted;
shifted = new unsigned int[alength];
int i;
for(i=0;i<alength;i++){
shifted[i] = 0;
}
unsigned int aux1 = 0;
unsigned int aux2 = 0;
int bts = floor(x/b_s);
int split = x%b_s;
i = bts;
int j = 0;
while(i<alength){
aux1 = orig[j]>>split;
shifted[i] = aux1|aux2;
aux2 = orig[j]<<(b_s-split);
i++;j++;
}
return shifted;
}
/* Returns true if there is a subset of set[] with sum equal to t */
bool isSubsetSum(int set[],int n, int t){
unsigned int w = sizeof(int)*8; //for regular word.
unsigned int wordsneeded = ceil(double(t+1)/w);
unsigned int elements = n;
//Create table
unsigned int table[elements][wordsneeded];
int c,i;
//Initialize first row
for(i=0;i<wordsneeded;i++){
table[0][i] = 0;
}
table[0][0] = 1<<(w-1);
//Fill the table in bottom up manner
int es,ss,ai;
for(c=1;c<=elements; c++){
unsigned int *aux = nbars(table[c-1],set[c-1]);
for(i=0;i<wordsneeded;i++){
table[c][i] = table[c-1][i]|aux[i];
}
}
if((table[elements][wordsneeded-1]>>((w*wordsneeded)-t-1))&1 ==1){
return true;
}return false;
}
int main(){
int set[] = {81,80,43,40,30,26,12,11,9};
//int sum = 63;
int sum = 1000;
int n = sizeof(set)/sizeof(set[0]);
if (isSubsetSum(set,n,sum) == true)
printf("\nFound a subset with given sum\n");
else
printf("\nNo subset with given sum\n");
return 0;
}
OK, so if I run the example with a target sum of 63, it works just fine: it gives the right answer (true), and if I run code to print the subset, it prints the right subset. However, if I change the sum to a larger one, say 1000 like in the code, I get the following error:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000400af1 in isSubsetSum (set=0x0, n=0, t=0) at redss.cpp:63
63 unsigned int *aux = nbars(table[c-1],set[c-1]);
from gdb. I really don't understand why it would fail only for larger sums, since the process should be the same. Am I missing something obvious? Any help would be great!
I'm trying to do simple bit operations on a 'char' variable;
I would like to define 5 constants.
const int a = 0;
const int b = 1;
const int c = 2;
const int d = 3;
const int e = 4;
When I try to set more than one bit of the char, all bits up to the highest set bit apparently read as set. Here is the code I use to set and read bits of the char variable:
char var = 0;
var |= c;
var|= d;
BOOL set = false;
if(var & b)
set = true; // reads true
if(var & c)
set = true; // also reads true
if(var & d)
set = true; // also reads true
I read an incomplete thread saying that the operations to set bits may be different on x86 (the system I'm using). Is that the case here?
Your constants are cutting into each other's bit space. Examining a couple of them gives us:
b = 1 = 0001
c = 2 = 0010
d = 3 = 0011 //uh oh, it's b and c put together (ORed)
To get around this, make each one represent a new bit position:
const int a = 0; //or 0x0
const int b = 1; //or 0x1
const int c = 2; //or 0x2 or 1 << 1
const int d = 4; //or 0x4 or 1 << 2
const int e = 8; //or 0x8 or 1 << 3
You should also consider not using 0 as a flag value if "no bits set" is supposed to mean something different. The main application for this is setting and checking flags, and a value of 0 is what you get when no flags are set at all, so it cannot be tested for with &.
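A quick sketch (my own) using the corrected one-bit-per-flag constants from above:
char var = 0;
var |= c;                     // set flag c
var |= d;                     // set flag d
bool has_b = (var & b) != 0;  // false - b was never set
bool has_c = (var & c) != 0;  // true
var &= ~c;                    // clear flag c again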
Change your definitions to the following, because the way you have defined them, some of the constants have more than one bit set:
const int a = 1 << 0;
const int b = 1 << 1;
const int c = 1 << 2;
const int d = 1 << 3;
const int e = 1 << 4;
This way it is evident that each constant only has 1 bit set.
If you want to learn all about the various bit hacks...