Generate distinct entries for a string (dots to be used) - C++

I have the following string:
char *str = "test";
I need to generate distinct entries from it using dots, e.g. the following would be generated:
test
t.est
te.st
tes.t
t.e.s.t
t.e.st
te.s.t
...
Note: there cannot be a dot at the start or at the end.
What I have currently is able to generate some of them, but not all. I tried multiple approaches, such as:
1. Working on the bit level (a dot toggled on and off for each position in each iteration), which sounded like the most reasonable to date, but I hit an obstacle.
2. Just a nested loop that generates based on equality, e.g. loop variables x and y, comparing x and y with i (where i serves as the loop index for generating the new string).
Current code I have:
#include <stdio.h>
#include <string.h>

int main() {
    char str[] = "test";
    for (int k = 0; k < sizeof(str) - 1; ++k) {
        for (int x = k; x < sizeof(str) - 1; ++x) {
            for (int y = x + 1; y < sizeof(str) - 1; ++y) {
                char tmp[512], *p = tmp;
                for (int i = 0; i < sizeof(str); ++i) {
                    *p++ = str[i];
                    if (i == x || i == y)
                        *p++ = '.';
                }
                *p++ = '\0';
                printf("%s\n", tmp);
            }
        }
    }
    return 0;
}
This gives:
t.e.st
t.es.t
t.est.
te.s.t
te.st.
tes.t.
te.s.t
te.st.
tes.t.
tes.t.
Is it best to use the bit-level approach, and if so, do you have any suggestions for it? Or is it better to continue with the current code and fix it to work correctly (please provide solutions)?
Note that performance isn't really needed here; this is a one-time thing (on startup), so anything will do as long as it works.

The word has four letters, so there are three breaks where you could insert a dot '.'. There will be 2^(n-1) combinations of inserting/not inserting dots. You can encode them as binary numbers:
dec  bin  word
---  ---  -------
  0  000  test
  1  001  t.est
  2  010  te.st
  3  011  t.e.st
  4  100  tes.t
  5  101  t.es.t
  6  110  te.s.t
  7  111  t.e.s.t
What you need to do now is to make a "mask" that changes from 0 to 2^(n-1) - 1, inclusive, and interpret this mask as a sequence of dots in a nested loop, like this:
#include <iostream>
#include <string>
using namespace std;

int main() {
    string s = "test";
    for (int mask = 0; mask != 1 << (s.size() - 1); mask++) {
        cout << s[0];
        for (int i = 0; i != s.size() - 1; i++) {
            if (mask & (1 << i)) {
                cout << ".";
            }
            cout << s[i + 1];
        }
        cout << endl;
    }
    return 0;
}

Use an integer as a bitmask: for each bit, print a '.' if it is set. If you iterate over all values from 0 to 2^(len-1) - 1 you will enumerate all possible combinations of dot positions:
#include <stdio.h>
#include <string.h>

int main(void) {
    char str[] = "test";
    int len = strlen(str);
    for (int bits = 0; bits < (1 << (len - 1)); bits++) {
        putchar(str[0]);
        for (int j = 1; j < len; j++) {
            if (bits & (1 << (j - 1)))
                putchar('.');
            putchar(str[j]);
        }
        putchar('\n');
    }
    return 0;
}
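For "test" this prints the eight variants in mask order, matching the table in the previous answer:
test
t.est
te.st
t.e.st
tes.t
t.es.t
te.s.t
t.e.s.t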

This function should do what you expect:
#include <stdio.h>
#include <string.h>

void dotify(char *str) {
    int nr = 1 << (strlen(str) - 1);
    char buf[strlen(str) * 2]; /* letters + dots + '\0' fit exactly */
    while (nr--) {
        int i;
        char *ptr = buf;
        for (i = 0; i < strlen(str); i++) {
            *ptr++ = str[i];
            if (nr & (1 << i))
                *ptr++ = '.';
        }
        *ptr = '\0';
        puts(buf);
    }
}
The fundamental idea behind this solution is to map each possible dot position to a digit of a binary number with strlen(str)-1 digits. Count this number from 0 to 2^(strlen(str)-1) - 1. Digit 0 means "don't set a dot", while 1 means "set a dot".
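A minimal usage sketch (this main is mine, not part of the answer):
int main(void) {
    char word[] = "test";
    dotify(word); /* prints all 8 dotted variants of "test" */
    return 0;
}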

Related

When you have a number with more than one digit on a created board, what should you do to prevent it from affecting the shape of the board?

I am new to C++, and I am trying to implement a 2048 game for practice. I am trying to create the board first.
The problem I have is that when a number becomes two digits, it affects the shape of the wall.
Here is the test code:
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string gameboard[24][25];
    int p = 24 / 4;
    for (int i = 0; i < 24; i++)
    {
        for (int j = 0; j < 25; j++)
        {
            if (i == 0 || j == 0 || i == 23 || j == 24 || (i % p) == 0 || (j % p) == 0)
            {
                gameboard[i][j] = '*';
            }
            else
            {
                gameboard[i][j] = " ";
            }
        }
    }
    gameboard[3][15] = "128";
    for (int i = 0; i < 24; ++i)
    {
        for (int j = 0; j < 25; ++j)
        {
            cout << gameboard[i][j] << " ";
        }
        cout << endl;
    }
}
So when I put the number as the string "128", it breaks the wall. What should I do to prevent this?
It looks like you actually want a char gameboard[24][25]; rather than a 2D array of strings. When each cell of the board is exactly one character wide, you just need to print it character by character to get the expected output.
If you do that, you need to place individual digits rather than the complete number as a string:
gameboard[3][13] = '1';
gameboard[3][14] = '2';
gameboard[3][15] = '8';
I recommend wrapping this inside a function:
void place_number(int number, int row, int col, char gameboard[24][25]) {
    int x = row * a + b;
    int y = col * c + d;
    std::string s = std::to_string(number);
    for (int i = 0; i < s.size(); ++i) {
        gameboard[x][y + i] = s[i];
    }
}
With coefficients a, b, c and d chosen such that the numbers end up in the right positions.
Doing such formatted printing can become cumbersome rather quickly; if you need more sophisticated control, I suggest using a library for that, for example ncurses.
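As a hypothetical concrete choice for the 24x25 board above (walls every p = 6 rows/columns, so each cell's interior starts one past its wall line), the coefficients could be:
// Hypothetical values: a = 6, b = 1, c = 6, d = 1, so cell (row, col)
// starts at board position (row * 6 + 1, col * 6 + 1).
int x = row * 6 + 1;
int y = col * 6 + 1;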
From the 25 places reserved for the width, you should consider subtracting the number of places occupied by the number you print in the cell.
You can turn the number into a std::string and take its length,
or mathematically get the number of digits using an existing function like log10.
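A minimal sketch of both ways of counting digits (function names are mine):
#include <cmath>
#include <string>

// Digit count via string conversion.
int digits_str(int n) { return std::to_string(n).size(); }

// Digit count via log10 (valid for positive n).
int digits_log(int n) { return static_cast<int>(std::log10(n)) + 1; }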

Create string of 7-bit ASCII text from 8-bit ASCII chars in C++

I wish to create a string with up to 46 octets, filled in with 7-bit ASCII chars. For example, for the string 'Hello':
1. I take the last 7 bits of 'H' (0x48 - 100 1000) and put them in the first 7 bits of the first octet.
2. I take the next char 'e' (0x65 - 110 0101); its first bit goes to the last bit of the first octet, then it fills the next 6 bits of octet 2.
3. Repeat 1-2 until the end of the string, then the rest of the octets are filled in with 1's.
Here is my attempt, which I have worked on quite a bit. I've tried using bitset, but it seems inappropriate for this task, as I do not always need 46 octets: if the string can fit in 12 (or 24, 36) octets (with the rest filled in by 1's), then I do not have to use 46.
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    std::string a = "Hello";
    int N = 0;
    if (a.size() <= 11) {
        // I'm supposed to implement some logic here to check if it
        // will fit 12, 24, 36 or 46 octets but I will do it later.
        N = 80;
    }
    std::vector<bool> temp(N);
    int j = 0;
    for (int i = 0; i < a.size(); i++) {
        std::vector<bool> chartemp(a[i]);
        cout << a[i] << "\n";
        cout << chartemp[0] << "\n";
        cout << chartemp[1] << "\n";
        cout << chartemp[2] << "\n";
        cout << chartemp[3] << "\n";
        cout << chartemp[4] << "\n";
        temp[j++] = chartemp[0];
        temp[j++] = chartemp[1];
        temp[j++] = chartemp[2];
        temp[j++] = chartemp[3];
        temp[j++] = chartemp[4];
        temp[j++] = chartemp[5];
        temp[j++] = chartemp[6];
    }
    for (int k = j; k < N; k++) {
        temp[j++] = 1;
    }
    std::string s = "";
    for (int l = 0; l <= temp.size(); l++)
    {
        if (temp[l]) {
            s += '1';
        }
        else {
            s += '0';
        }
    }
    cout << s << "\n";
}
The result is
000000000000000000000000000000000001111111111111111111111111111111111111111111110
It seems as if you expect the statement std::vector<bool> chartemp(a[i]) to copy the i-th character of a as a series of bits into the vector. Yet this constructor of vector interprets the value as the initial size, and a[i] is the ASCII value of the respective character in a (e.g. 72 for 'H'). So you have a good chance of creating vectors of a larger size than expected, with each position initialized to false.
Instead, I'd suggest using bit masking:
temp[j++] = a[i] & (1 << 6);
temp[j++] = a[i] & (1 << 5);
temp[j++] = a[i] & (1 << 4);
temp[j++] = a[i] & (1 << 3);
temp[j++] = a[i] & (1 << 2);
temp[j++] = a[i] & (1 << 1);
temp[j++] = a[i] & (1 << 0);
And instead of using temp[j++], you could use temp.push_back(a[i] & (1 << 0)), thereby also removing the need to initialize the vector with the right size.
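A minimal sketch of the corrected character loop with push_back, using the question's string a (the loop structure is mine):
std::vector<bool> temp;
for (std::size_t i = 0; i < a.size(); i++) {
    // Append the 7 low bits of each character, most significant bit first.
    for (int b = 6; b >= 0; b--)
        temp.push_back((a[i] >> b) & 1);
}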
Try something like this:
#include <string>
#include <vector>

std::string stuffIt(const std::string &str, const int maxOctets)
{
    const int maxBits = maxOctets * 8;
    const int maxChars = maxBits / 7;

    if (str.size() > maxChars)
    {
        // too many chars to stuff into maxOctets!
        return "";
    }

    std::vector<bool> temp(maxBits);
    int idx = temp.size() - 1;

    for (int i = 0; i < str.size(); ++i)
    {
        char ch = str[i];
        for (int j = 0; j < 7; ++j)
            temp[idx--] = (ch >> (6 - j)) & 1;
    }

    int numBits = (((7 * str.size()) + 7) & ~7);
    for (int i = (temp.size() - numBits - 1); i >= 0; --i) {
        temp[i] = 1;
    }

    std::string s;
    s.reserve(temp.size());

    for (int j = temp.size() - 1; j >= 0; --j)
        s.push_back(temp[j] ? '1' : '0');

    return s;
}
stuffIt("Hello", 12) returns:
100100011001011101100110110011011110000011111111111111111111111111111111111111111111111111111111
stuffIt("Hello", 24) returns:
100100011001011101100110110011011110000011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
stuffIt("Hello", 36) returns:
100100011001011101100110110011011110000011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
stuffIt("Hello", 46) returns:
10010001100101110110011011001101111000001111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
If you want to know how many octets a given string will require (without adding octets full of 1s), you can use this formula:
const int numChars = str.size();
const int numBits = (numChars * 7);
const int bitsNeeded = ((numBits + 7) & ~7);
const int octetsNeeded = (bitsNeeded / 8);
If you want the extra 1s, just round octetsNeeded up to the desired value (for instance, the next multiple of 12).
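For instance, a one-line sketch of rounding up to the next multiple of 12 (the variable name is mine):
const int roundedOctets = ((octetsNeeded + 11) / 12) * 12;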

Why is this bitset collection algorithm not working?

Here's my goal:
1. Create all possible bit strings of length N.
2. Once I've created a possible string, I want to take B bits at a time, convert them to an index, and use that index to fetch a character from the following string:
#define ALPHABET "abcdefghijklmnopqrstuvwxyz012345"
3. I want to add each character to a string, then print the string once all bits have been parsed.
4. Repeat until all possible bit strings are processed.
Here's my solution:
for (unsigned int i = 0; i < pow(2, N); i++) {
    // Create bit set.
    std::bitset<N> bits(i);
    // String to hold characters.
    std::string key_val;
    // To hold B bits per time.
    std::bitset<B> temp;
    for (unsigned int j = 0; j < bits.size(); j++) {
        // Add to bitset.
        temp[j % B] = bits[j];
        if (j % B == 0) {
            key_val += ALPHABET[temp.to_ulong()];
        }
    }
    std::cout << key_val << std::endl;
    key_val.clear();
}
Here's the problem:
The output makes no sense. The program creates really weird sequences that aren't what I need.
Ideally, the output should be:
aaaaa
aaaab
aaaac
.
.
.
And here's the output I'm getting:
aaaaa
baaaa
acaaa
bcaaa
aeaaa
beaaa
agaaa
bgaaa
aiaaa
.
.
.
The "append character" condition triggers immediately (j == 0), this is probably not what you want. You'll also need to take care about the end if bits size is not a multiple of B
for (unsigned int j = 0; j < bits.size(); j++) {
    // Add to bitset.
    temp[j % B] = bits[j];
    if (j % B == B - 1 || j == bits.size() - 1) {
        key_val += ALPHABET[temp.to_ulong()];
    }
}
Edit: Instead of looping over all bits individually, you can probably do something like this (note the modulus must be 2^B, not B, to extract B bits):
for (int j = 0; j < bits.size(); j += B) {
    key_val += ALPHABET[bits.to_ulong() % (1 << B)]; // low B bits
    bits >>= B;
}
P.S.: If the bits fit into the loop variable, you don't need a bitset at all:
for (unsigned int i = 0; i < (1 << N); i++) {
    std::string key_val;
    for (unsigned int j = 0; j < N; j += B) {
        key_val += ALPHABET[(i >> j) % (1 << B)]; // low B bits of i >> j
    }
    std::cout << key_val << std::endl;
}
P.P.S. You may want/need to count down in the inner loop instead if you want the digits reversed; see the sketch below.
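A minimal sketch of that counting-down variant (assuming ALPHABET, N and B from the question, and that N is a multiple of B):
for (unsigned int i = 0; i < (1 << N); i++) {
    std::string key_val;
    // Start at the most significant group so the leftmost character
    // corresponds to the highest bits.
    for (int j = N - B; j >= 0; j -= B) {
        key_val += ALPHABET[(i >> j) % (1 << B)];
    }
    std::cout << key_val << std::endl;
}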

All possible combinations of bits

I am working on a program in C++ to demonstrate the workings of coding theory (in the sense of error correction using linear codes). I am adding parity bits to a string of bits ('words'), so that I can still see what the message used to be if some bits have changed during transmission (error detection and correction).
One important thing to know is the minimum distance between two words. To calculate this I need to compile a list of all possible words and compare them to each other. If my error correction code consists of words of length n = 6, there would be 2^6 = 64 possible combinations. My question is about how I can generate all these possible words and store them in an array.
These are two instances of what these words would look like:
0 0 0 0 0 0
1 0 0 0 0 0
1 1 0 1 0 1
I know I can generate combinations of two numbers with an algorithm like this:
for (int i = 1; i <= 5; i++)
    for (int j = 2; j <= 5; j++)
        if (i != j)
            cout << i << "," << j << "," << endl;
However, this code only generates combinations of two numbers and also uses numbers other than 1 or 0.
EDIT
I have created a few for loops that do the job. It is not especially elegant:
int bits[64][6] = { 0 };

for (int x = 0; x < 32; x++)
    bits[x][0] = 1;

for (int x = 0; x < 64; x += 2)
    bits[x][1] = 1;

for (int x = 0; x < 64; x += 4)
{
    bits[x][2] = 1;
    bits[x + 1][2] = 1;
}

for (int x = 0; x < 64; x += 8)
{
    bits[x][3] = 1;
    bits[x + 1][3] = 1;
    bits[x + 2][3] = 1;
    bits[x + 3][3] = 1;
}

for (int x = 0; x < 64; x += 16)
{
    for (int i = 0; i < 8; i++)
        bits[x + i][4] = 1;
}

for (int x = 0; x < 64; x += 32)
{
    for (int i = 0; i < 16; i++)
        bits[x + i][5] = 1;
}
You may use the following: http://ideone.com/C8O8Qe
template <std::size_t N>
bool increase(std::bitset<N>& bs)
{
    for (std::size_t i = 0; i != bs.size(); ++i) {
        if (bs.flip(i).test(i) == true) {
            return true;
        }
    }
    return false; // overflow
}
And then, to iterate over all values:
std::bitset<5> bs;
do {
    std::cout << bs << std::endl;
} while (increase(bs));
If the size is not a compile-time value, you may use similar code with std::vector<bool>, as sketched below.
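A minimal sketch of that std::vector<bool> variant (the body mirrors the bitset version above):
#include <vector>

bool increase(std::vector<bool>& bs)
{
    for (std::size_t i = 0; i != bs.size(); ++i) {
        bs[i] = !bs[i]; // flip bit i
        if (bs[i]) {    // flipped 0 -> 1: no further carry, done
            return true;
        }
    }
    return false; // overflow
}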
I'd use iota or similar:
vector<int> foo(64);             // Create a vector to hold 64 entries
iota(foo.begin(), foo.end(), 0); // Fill foo with [0, foo.size()); iota is in <numeric>
for (auto& i : foo) {
    cout << bitset<6>(i) << endl;
}
I should probably also point out that an int is a collection of sizeof(int) * CHAR_BIT bits, so hopefully you can work with that using bitwise operators.
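For example, a minimal sketch that prints all 64 six-bit patterns with plain bitwise operators, no bitset needed:
for (int i = 0; i < 64; ++i) {
    for (int b = 5; b >= 0; --b)
        cout << ((i >> b) & 1); // print bit b of i, highest bit first
    cout << endl;
}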
If you must use a more literal collection of bits, I would second Jarod42's answer, but still use iota:
vector<bitset<6>> bar(64);
iota(bar.begin(), bar.end(), 0);
for (auto& i : bar) {
    cout << i << endl;
}
Use a double loop: the outer index from 0 to 62, the inner from one past the outer index to 63.
Inside the loops, convert the two indexes to binary. (A simple way is to convert to hexadecimal and expand the hex digits into four bits.)
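Since the goal is the minimum distance of the code, here is a minimal sketch (mine, not part of the answer) of that pairwise comparison using XOR and a bit count:
#include <bitset>
#include <iostream>

int main() {
    int min_dist = 6;
    // The Hamming distance of two 6-bit words is the number of set bits
    // in their XOR. In practice you would loop over your codewords only;
    // over all 64 possible words the minimum is trivially 1.
    for (int i = 0; i <= 62; ++i) {
        for (int j = i + 1; j <= 63; ++j) {
            int dist = std::bitset<6>(i ^ j).count();
            if (dist < min_dist)
                min_dist = dist;
        }
    }
    std::cout << min_dist << std::endl;
    return 0;
}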

for loop: a string passes from "0" to "9"

I want to write a for loop whose string counter passes from "0" to "9":
for (string j = "0"; j < "10"; j++) {
}
but string doesn't know the operator ++ (it would receive an integer 1 and not a string "1").
I thought of writing j += "1", but then j will be "01" and then "011"...
P.S. I don't want to use functions from #include <string> or anything else (stoi, etc.).
Any help appreciated!
Loop with integers, then manually convert each one to a string? Like:
for (int i = 0; i < 10; i++)
{
    string j(1, '0' + i); // Only works for single digit numbers
}
Do your iterations with integers, and convert them to strings inside the loop, like this:
// Your loop counter is of type int
for (int i = 0; i != 10; i++) {
    // Use a string stream (from <sstream>) to convert it to a string
    ostringstream oss;
    oss << i;
    // si is a string representation of i
    string si = oss.str();
    cout << si << endl;
}
This works for any kind of integer, without limitations.
Not that way, but...
char x[] = {'0', 0};
for (; x[0] <= '9'; ++x[0])
{
}
Edit: 0..99 version
char x[] = {'0', '0', 0};
int stroffs = 1;
for (; x[0] <= '9'; ++x[0])
{
    for (x[1] = '0'; x[1] <= '9'; ++x[1])
    {
        char *real_str = x + stroffs;
    }
    stroffs = 0;
}
You can do j[0]++ to increase the first char to the next ASCII value, but this only works for '0'-'9'.
Edit: just an idea, not perfect code, for 0 to 20:
i = 0;
(j > "9") && (i == 0) ? j[i]++ : (++i, j = "10");
With your constraints (although I have no idea how you use string without #include <string>):
const char* x[] =
{
    "0", "1", ...., "10"
};
for (int i = 0; i <= 10; i++)
{
    x[i]....
}
You can use atoi with:
#include <stdlib.h> // or #include <cstdlib>
for (int j = std::atoi("0"); j < std::atoi("10"); j++) {
}