I am working on a C++ program to demonstrate the workings of coding theory (in the sense of error correction using linear codes). I am adding parity bits to strings of bits ('words'), so that I can still see what the message used to be if some bits have changed during transmission (error detection and correction). One important thing to know is the minimum distance between any two words. To calculate this I need to compile a list of all possible words and compare them to each other. If my error-correcting code consists of words of length n = 6, there are 2^6 = 64 possible combinations. My question is how I can generate all these possible words and store them in an array.
Here are a few examples of what these words would look like:
0 0 0 0 0 0
1 0 0 0 0 0
1 1 0 1 0 1
I know I can generate combinations of two numbers with an algorithm like this:
for (int i = 1; i <= 5; i++)
    for (int j = 2; j <= 5; j++)
        if (i != j)
            cout << i << "," << j << "," << endl;
However, this code only generates combinations of two numbers and also uses numbers other than 1 or 0.
EDIT
I have created a few for loops that do the job. They are not especially elegant:
int bits[64][6] = { 0 };

for (int x = 0; x < 32; x++)
    bits[x][0] = 1;
for (int x = 0; x < 64; x += 2)
    bits[x][1] = 1;
for (int x = 0; x < 64; x += 4)
{
    bits[x][2] = 1;
    bits[x + 1][2] = 1;
}
for (int x = 0; x < 64; x += 8)
{
    bits[x][3] = 1;
    bits[x + 1][3] = 1;
    bits[x + 2][3] = 1;
    bits[x + 3][3] = 1;
}
for (int x = 0; x < 64; x += 16)
{
    for (int i = 0; i < 8; i++)
        bits[x + i][4] = 1;
}
for (int x = 0; x < 64; x += 32)
{
    for (int i = 0; i < 16; i++)
        bits[x + i][5] = 1;
}
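For comparison, the table can also be filled with a single pair of loops. This is just a sketch; it produces the same 64 rows, only in plain binary-counting order rather than the order above:

int bits[64][6];
for (int x = 0; x < 64; ++x)
    for (int b = 0; b < 6; ++b)
        bits[x][b] = (x >> b) & 1; // bit b of the row number x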
You may use the following: http://ideone.com/C8O8Qe
#include <bitset>

template <std::size_t N>
bool increase(std::bitset<N>& bs)
{
    for (std::size_t i = 0; i != bs.size(); ++i) {
        if (bs.flip(i).test(i) == true) {
            return true; // this bit flipped 0 -> 1, so the carry stops here
        }
    }
    return false; // overflow: all bits wrapped around to 0
}
And then, to iterate over all values:
std::bitset<5> bs;
do {
    std::cout << bs << std::endl;
} while (increase(bs));
If the size is not a compile-time value, you may use similar code with std::vector<bool>.
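For instance, a minimal sketch of that variant (same carry propagation as above, width chosen at run time):

#include <iostream>
#include <vector>

// Flip bits from the low end until one flips from 0 to 1.
bool increase(std::vector<bool>& bs)
{
    for (std::size_t i = 0; i != bs.size(); ++i) {
        bs[i] = !bs[i];
        if (bs[i]) {     // flipped 0 -> 1: no carry left, done
            return true;
        }
    }
    return false; // overflow
}

int main()
{
    std::vector<bool> bs(6); // n = 6, all zeros
    do {
        for (std::size_t i = bs.size(); i-- > 0; ) // most significant bit first
            std::cout << bs[i];
        std::cout << '\n';
    } while (increase(bs));
}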
I'd use iota or similar:
vector<int> foo(64);             // create a vector with 64 entries
iota(foo.begin(), foo.end(), 0); // fill foo with the numbers [0, foo.size())
for (auto& i : foo) {
    cout << bitset<6>(i) << endl;
}
I should probably also point out that an int is a collection of sizeof(int) * CHAR_BIT bits, so hopefully you can work with that using bit-wise operators.
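For instance, a tiny sketch that treats a plain int as the collection of bits:

#include <iostream>

int main()
{
    int word = 45;               // 101101 in binary
    for (int j = 5; j >= 0; --j) // walk the low 6 bits, most significant first
        std::cout << ((word >> j) & 1) << ' ';
    std::cout << '\n';           // prints: 1 0 1 1 0 1
}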
If you must use a more literal collection of bits, I would second Jarod42's answer, but still use iota:
vector<bitset<6>> bar(64);
iota(bar.begin(), bar.end(), 0);
for (auto& i : bar) {
    cout << i << endl;
}
Use a double loop, with the outer index running from 0 to 62 and the inner index running from one past the outer index to 63. Inside the loops, convert the two indexes to binary. (A simple way is to convert to hexadecimal and expand each hex digit into four bits.)
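A sketch of that pairwise comparison, computing the minimum Hamming distance with XOR and std::bitset::count instead of the hex expansion (note: over all 64 possible words the minimum is trivially 1, so in the real program the loops would run over the chosen codewords only):

#include <bitset>
#include <iostream>

int main()
{
    const int n = 6;
    int minDist = n + 1;
    for (int a = 0; a <= 62; ++a) {
        for (int b = a + 1; b <= 63; ++b) {
            int diff = a ^ b;                        // bits where the two words differ
            int dist = std::bitset<n>(diff).count(); // Hamming distance
            if (dist < minDist)
                minDist = dist;
        }
    }
    std::cout << "minimum distance: " << minDist << '\n';
}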
Can someone help me? For example, if I input "RandomName" for x and "and" for y, it needs to count how many times each letter of y occurs in the word x, and the output should be:
a: 2
n: 1
d: 1
void counter(string x, string y)
{
    int signs[100];
    int amount = 0;
    for (int i = 0; i < y.length(); i++)
    {
        signs[i] = y[i];
        for (int j = 0; j < x.length(); j++)
        {
            if (x[i] == y[i])
            {
                amount++;
            }
        }
        cout << y[i] << ":" << amount << endl;
    }
}
There are several errors in your code:
You always compare x[i] to y[i], ignoring the j index completely, which means you will never count a letter that is not at the same position in both strings.
You have a signs array that you assign values to but never use. Also, it has size 100 - why 100?
You never reset the amount variable after the internal loop, which means it will not count each letter individually.
You have the right idea - two loops, one iterating over all letters in y, the other over all letters in x.
Fix the indexes in the comparison, reset amount to 0 after you print it, and get rid of signs, and your code should work; see the sketch below.
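A minimal sketch with those three fixes applied:

#include <iostream>
#include <string>
using namespace std;

void counter(string x, string y)
{
    for (size_t i = 0; i < y.length(); i++)
    {
        int amount = 0; // reset for each letter of y
        for (size_t j = 0; j < x.length(); j++)
        {
            if (x[j] == y[i]) // use both indexes in the comparison
                amount++;
        }
        cout << y[i] << ": " << amount << endl;
    }
}

int main()
{
    counter("RandomName", "and"); // prints a: 2, n: 1, d: 1
}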
You increase amount every time you find a letter that matches the input, but amount is shared between all the letters passed in as parameters. To fix it, just keep one count per letter in the parameters.
#include <iostream>
#include <string>
#include <malloc.h> // _alloca (MSVC-specific stack allocation)

int main() {
    std::string x = "RandomName";
    std::string y = "and";
    // Track the letters of y that were already counted, so duplicates are skipped.
    char* letterList = (char*)_alloca(y.size());
    int used = 0;
    for (size_t l1 = 0; l1 < y.size(); l1++) {
        char letter = y.at(l1);
        bool alreadyCounted = false;
        for (int i = 0; i < used; i++) { // only scan the entries stored so far
            if (letter == letterList[i]) {
                alreadyCounted = true;
                break;
            }
        }
        if (alreadyCounted)
            continue;
        letterList[used] = letter;
        used++;
        int count = 0;
        for (size_t l2 = 0; l2 < x.size(); l2++) {
            if (letter == x.at(l2)) { count++; }
        }
        std::cout << letter << " = " << count << std::endl;
    }
}
This should do what you want.
Here's my goal:
Create all possible bit strings of length N.
Once I've created a possible string, I want to take B bits at a time, convert them to an index, and use that index to fetch a character from the following string:
#define ALPHABET "abcdefghijklmnopqrstuvwxyz012345"
I want to add each character to a string, then print the string once all bits have been parsed.
Repeat until all possible bit strings are processed.
Here's my solution:
for (unsigned int i = 0; i < pow(2, N); i++) {
    // Create bit set.
    std::bitset<N> bits(i);
    // String to hold characters.
    std::string key_val;
    // To hold B bits per time.
    std::bitset<B> temp;
    for (unsigned int j = 0; j < bits.size(); j++) {
        // Add to bitset.
        temp[j % B] = bits[j];
        if (j % B == 0) {
            key_val += ALPHABET[temp.to_ulong()];
        }
    }
    std::cout << key_val << std::endl;
    key_val.clear();
}
Here's the problem:
The output makes no sense. I can see the program creates really weird sequences that aren't what I need.
Ideally, the output should be:
aaaaa
aaaab
aaaac
.
.
.
And here's the output I'm actually getting:
aaaaa
baaaa
acaaa
bcaaa
aeaaa
beaaa
agaaa
bgaaa
aiaaa
.
.
.
The "append character" condition triggers immediately (j == 0), this is probably not what you want. You'll also need to take care about the end if bits size is not a multiple of B
for (unsigned int j = 0; j < bits.size(); j++) {
    // Add to bitset.
    temp[j % B] = bits[j];
    if (j % B == B - 1 || j == bits.size() - 1) {
        key_val += ALPHABET[temp.to_ulong()];
    }
}
Edit: Instead of looping over all bits individually, you can probably do something like this (note the index is the low B bits, i.e. modulo 2^B, not modulo B):
for (unsigned int j = 0; j < bits.size(); j += B) {
    key_val += ALPHABET[bits.to_ulong() % (1 << B)]; // low B bits
    bits >>= B;
}
P.S.: If the bits fit into the loop variable, you don't need a bitset at all.
for (unsigned int i = 0; i < (1 << N); i++) {
    std::string key_val;
    for (unsigned int j = 0; j < N; j += B) {
        key_val += ALPHABET[(i >> j) % (1 << B)]; // low B bits of i >> j
    }
    std::cout << key_val << std::endl;
}
P.P.S.: You may want/need to count down in the inner loop instead if you want the digits reversed; a complete sketch follows.
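Putting those pieces together, a complete sketch that counts down so the most significant group comes first (N = 10 and B = 5 are assumptions chosen to keep the output short; B = 5 matches the 32-entry ALPHABET):

#include <iostream>
#include <string>

#define ALPHABET "abcdefghijklmnopqrstuvwxyz012345"

int main()
{
    const int N = 10; // total bits per string (assumed value)
    const int B = 5;  // bits per character; 2^5 = 32 alphabet entries
    for (unsigned int i = 0; i < (1u << N); i++) {
        std::string key_val;
        for (int j = N - B; j >= 0; j -= B) { // most significant group first
            key_val += ALPHABET[(i >> j) & ((1u << B) - 1)];
        }
        std::cout << key_val << '\n'; // aa, ab, ac, ..., 55
    }
}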
I have a problem writing a function in C++ that will give me back a binary number.
void function(unsigned short int user_input, int tab[16]) {
    for (int ii = 0; ??; ii++) {
        tab[ii] = user_input % 2;
        user_input = user_input / 2;
    }
}
The user inputs a DEC (decimal) number and gets back BIN (binary).
Should I just type ii < 16? It's working, but is it correct?
That'll work, but it is a bit wasteful (if you enter 1, you divide 0 by 2 fifteen times). Also, division by 2 can be "sped up" by shifting. Here's an alternative:
void function(unsigned short int user_input, int tab[16]) {
    int idx = 0;
    while (user_input > 0)
    {
        tab[idx] = user_input & 1;    // lowest bit
        user_input = user_input >> 1; // shift instead of dividing
        idx++;
    }
    for (; idx < 16; idx++)
    {
        tab[idx] = 0; // zero-fill the rest
    }
}
A somewhat more compact version:
void function(uint16_t x, int b[16]) {
    for (int i = 0; i != 16; ++i, x >>= 1)
        b[i] = x & 1;
}
Note that this puts the least significant bit in b[0].
To answer your question: yes, ii < 16 is correct. It makes the loop iterate 16 times, with ii going from 0 to 15, and on each execution of the loop body you take the lowest bit and then shift.
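For illustration, a small usage sketch (the main here is hypothetical; it prints the array back most significant bit first):

#include <cstdint>
#include <iostream>

void function(uint16_t x, int b[16]) {
    for (int i = 0; i != 16; ++i, x >>= 1)
        b[i] = x & 1;
}

int main() {
    int b[16];
    function(42, b);              // 42 = 101010 in binary
    for (int i = 15; i >= 0; --i) // b[0] holds the least significant bit,
        std::cout << b[i];        // so print in reverse for MSB-first output
    std::cout << '\n';            // prints 0000000000101010
}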
I have the following string:
char *str = "test";
I need to generate distinct entries from it using dots, e.g. the following would be generated:
test
t.est
te.st
tes.t
t.e.s.t
t.e.st
te.s.t
...
Note: there cannot be a dot at the start or at the end.
What I have currently is able to generate some of them, but not all. I tried multiple approaches, such as:
1. Working on the bit level (a dot toggled on or off for each gap), which sounded like the most reasonable approach to date, but I hit an obstacle.
2. Just a nested loop that generates based on index equality, e.g. x and y as dot positions, compared with i (where i serves as the loop index for building the new string).
Current code I have:
#include <stdio.h>
#include <string.h>

int main() {
    char str[] = "test";
    for (int k = 0; k < sizeof(str) - 1; ++k) {
        for (int x = k; x < sizeof(str) - 1; ++x) {
            for (int y = x + 1; y < sizeof(str) - 1; ++y) {
                char tmp[512], *p = tmp;
                for (int i = 0; i < sizeof(str); ++i) {
                    *p++ = str[i];
                    if (i == x || i == y)
                        *p++ = '.';
                }
                *p++ = '\0';
                printf("%s\n", tmp);
            }
        }
    }
    return 0;
}
This gives:
t.e.st
t.es.t
t.est.
te.s.t
te.st.
tes.t.
te.s.t
te.st.
tes.t.
tes.t.
Is it best to use the bit-level approach, and if so, any suggestions for it? Or is it better to continue with the current code and fix it up to work correctly (please provide solutions)?
Note that performance isn't really needed here; this is a one-time thing (on startup), so anything will do as long as it works.
The word has four letters, so there are three breaks where you could insert a dot '.'. There will be 2^(n-1) combinations of inserting/not inserting dots. You can encode them as binary numbers:
dec  bin  word
---  ---  -------
 0   000  test
 1   001  t.est
 2   010  te.st
 3   011  t.e.st
 4   100  tes.t
 5   101  t.es.t
 6   110  te.s.t
 7   111  t.e.s.t
What you need to do now is to make a "mask" that changes from 0 to 2^(n-1) - 1, inclusive, and interpret this mask as a sequence of dots in a nested loop, like this:
string s = "test";
for (int mask = 0 ; mask != 1 << (s.size()-1) ; mask++) {
cout << s[0];
for (int i = 0 ; i != s.size()-1 ; i++) {
if (mask & (1<<i)) {
cout << ".";
}
cout << s[i+1];
}
cout << endl;
}
Use an integer as a bitmask: for each bit that is set, print a '.'. If you iterate over all values from 0 up to 2^(len-1) (exclusive), you will enumerate all possible positions for the dots, in all possible combinations:
#include <stdio.h>
#include <string.h>

int main(void) {
    char str[] = "test";
    int len = strlen(str);
    for (int bits = 0; bits < (1 << (len - 1)); bits++) {
        putchar(str[0]);
        for (int j = 1; j < len; j++) {
            if (bits & (1 << (j - 1)))
                putchar('.');
            putchar(str[j]);
        }
        putchar('\n');
    }
    return 0;
}
This function should do what you expect:
void dotify(char *str) {
    int nr = 1 << (strlen(str) - 1);
    char buf[strlen(str) * 2]; /* worst case: every gap dotted, plus '\0' */
    while (nr--) {
        int i;
        char *ptr = buf;
        for (i = 0; i < strlen(str); i++) {
            *ptr++ = str[i];
            if (nr & (1 << i))
                *ptr++ = '.';
        }
        *ptr = '\0';
        puts(buf);
    }
}
The fundamental idea behind this solution is to map each possible dot position to one digit of a binary number with strlen(str) - 1 digits, and to run that number through all of its 2^(strlen(str)-1) values. A digit of 0 means "don't insert a dot", while 1 means "insert a dot".
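A possible usage sketch (the main here is hypothetical; the writable array matters because dotify takes a char*):

#include <stdio.h>
#include <string.h>

/* dotify() as defined above */

int main(void) {
    char word[] = "test"; /* writable buffer, not a string literal */
    dotify(word);
    return 0;
}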
I have a very long vector in Eigen:
VectorXi iIndex(1000000);
which is initialized to zeros, with only a short contiguous run (fewer than 100 elements) set to 1, at a randomized position.
I need to do the following things in a long loop:
int count;
int position;
for (int i = 0; i < 99999; i++) {
    // ... randomly fill iIndex with a short run of 1s,
    // e.g. iIndex = (someVectorXi.array() == i).cast<int>();
    count = iIndex.sum(); // count all the nonzero elements
    // find the index of the first nonzero element:
    for (int j = 0; j < 1000000; j++) {
        if (iIndex(j))
            position = j;
    }
}
But it is really very slow. Is there any way to speed it up?
My 2 cents: group the bits into e.g. uint32_t, so you can first check whether a whole 32-bit word differs from 0; only when it does do you take the extra time to find out which of its bits are 1.
Assuming the number of bits is a multiple of 32 (which makes it easier):
for (int i = 0; i < max / BITS_PER_UINT32; ++i) // max is the total bit count
{
    if (wordGrp[i] != 0) // a whole word of 32 zero bits is skipped at once
    {
        uint32_t grp = wordGrp[i];
        for (int j = 0; j < BITS_PER_UINT32; j++)
        {
            if ((grp & 1) == 1) std::cout << "idx " << (i * 32 + j) << " is 1\n";
            grp >>= 1;
        }
    }
}
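For illustration, a self-contained version of that sketch (wordGrp, max and BITS_PER_UINT32 come from the fragment above; the concrete values here are assumptions):

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    const int BITS_PER_UINT32 = 32;
    const int max = 1000000; // number of bits; happens to be a multiple of 32
    std::vector<uint32_t> wordGrp(max / BITS_PER_UINT32, 0); // packed bits, all zero

    wordGrp[1234] = 0x0F000000; // plant a short run of 1s somewhere

    for (int i = 0; i < max / BITS_PER_UINT32; ++i)
    {
        if (wordGrp[i] != 0) // skip 32 zero bits per iteration
        {
            uint32_t grp = wordGrp[i];
            for (int j = 0; j < BITS_PER_UINT32; j++)
            {
                if ((grp & 1) == 1)
                    std::cout << "idx " << (i * 32 + j) << " is 1\n";
                grp >>= 1;
            }
        }
    }
}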