Given a set of byte-representable symbols (e.g. characters, short strings, etc), is there a way to find a mapping from that set to a set of consecutive natural numbers that includes 0? For example, suppose there is the following unordered set of characters (not necessarily in any particular character set).
'a', '(', '🍌'
Is there a way to find a "hash" function of sorts that would map each symbol (e.g. by means of its byte representation) uniquely to one of the integers 0, 1, and 2, in any order? For example, 'a'=0, '('=1, '🍌'=2 is just as valid as 'a'=2, '('=0, '🍌'=1.
Why?
Because I am developing something for a memory-constrained (think on the order of kiB) embedded target that has a lot of fixed reverse-lookup tables, so something like std::unordered_map would be out of the question. The ETL equivalent etl::unordered_map would be getting there, but there's quite a bit of size overhead, and collisions can happen, so lookup timings could differ. A sparse lookup table would work, where the byte representation of the symbol would be the index, but that would be a lot of wasted space, and there are many different tables.
There's also the chance that the "hash" function may end up costing more than the above alternatives, but my curiosity is a strong driving force. Also, although both C and C++ are tagged, this question is specific to neither of them. I just happen to be using C/C++.
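To make the goal concrete, here is a brute-force sketch of the kind of mapping being asked about (assumed names; a seeded FNV-style hash searched until it happens to be a minimal perfect hash for the exact set, not a production construction like gperf or CHD would emit):

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <string>
#include <vector>

// FNV-1a-style hash with a tweakable seed (the FNV constants are just a
// convenient choice of mixing function).
uint32_t seeded_hash(const std::string& s, uint32_t seed) {
    uint32_t h = seed ^ 2166136261u;
    for (unsigned char c : s) { h ^= c; h *= 16777619u; }
    return h;
}

// Brute-force a seed whose hash, taken mod n, maps the n symbols to n
// distinct values -- i.e. a minimal perfect hash for this exact set.
bool find_seed(const std::vector<std::string>& syms, uint32_t& seed_out) {
    const uint32_t n = syms.size();
    for (uint32_t seed = 0; seed < 1000000; ++seed) {
        std::set<uint32_t> seen;
        for (const auto& s : syms) seen.insert(seeded_hash(s, seed) % n);
        if (seen.size() == n) { seed_out = seed; return true; }
    }
    return false;
}
```

For a three-symbol set such as "a", "(" and the banana emoji (as a UTF-8 byte string), the search typically succeeds after only a few seeds, and seeded_hash(s, seed) % 3 then yields some permutation of 0, 1, 2. At run time the lookup costs one short hash plus a modulo, with no table at all.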
The normal way to do things like this, for example when coding a font for a custom display, is to map everything to a sorted, read-only look-up table array with indices 0 to 127 or 0 to 255, where symbols corresponding to the old ASCII table are mapped to their respective index, and other things like your banana symbol are mapped beyond index 127.
So when you use FONT [97] or FONT ['a'], you end up with the symbol corresponding to 'a'. That way you can translate from ASCII strings to your custom table, or from your source editor font to the custom table.
Using any other data type such as a hash table sounds like muddy program design to me. Embedded systems should by their nature be deterministic, so overly complex data structures don't make sense most of the time. If you for some reason unknown must have the data unordered, then you should describe the reason why in detail, or otherwise you are surely asking an "XY question".
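A minimal sketch of that direct-indexed table (glyph names here are made up; a real font would store bitmaps):

```cpp
#include <cassert>
#include <cstring>

// A minimal sketch of the direct-indexed, read-only table described above:
// the byte value of an ASCII symbol is its own index, and custom symbols
// (like the banana glyph) take hypothetical slots beyond 127.
enum { BANANA = 128 };  // assumed slot for the custom symbol

// In a real font these entries would be glyph bitmaps; strings stand in here.
static const char* FONT[256] = {};

void init_font() {
    FONT['a'] = "glyph_a";          // ASCII symbols keep their own codes
    FONT['('] = "glyph_lparen";
    FONT[BANANA] = "glyph_banana";  // custom symbol mapped past ASCII
}
```

After init_font(), FONT['a'] -- that is, FONT[97] -- yields the entry for 'a' directly, with no hashing or searching at all, and the custom symbol lives at its fixed slot past 127.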
Yes, there is such a map. Just put all of them in an array of strings and make a function that searches for the word in the array and returns its index (later you can sort the array to search faster).
#include <string.h> /* strcmp */
#include <stdlib.h> /* bsearch, used further below */
static const char *strings[] = {
"word1", "word2", "hello", "world", NULL, /* to end the array */
};
int word_number(const char *word)
{
for(int i = 0; strings[i] != NULL; i++) {
if (strcmp(strings[i], word) == 0)
return i;
}
return -1; /* not found */
}
The cost of this in space terms is very low, considering that the compiler can optimize string allocation based on common suffixes (making a string overlap another when it is a common suffix of it). And if you give the compiler an already sorted array of literals, you can use the bsearch() algorithm, which is O(log(n)) in the number of elements in the table:
static const char *strings[] = { /* this time sorted */
"Hello",
"rella", /* this and the next may be merged into the same literal;
* this can be controlled with a compiler option. */
"umbrella",
"world"
};
const int strings_len = 4;
int string_cmp(const void *p1, const void *p2)
{
/* bsearch passes a pointer to the key and a pointer to the array
* element, so both arguments point to a char pointer here */
const char *s1 = *(const char * const *)p1;
const char *s2 = *(const char * const *)p2;
return strcmp(s1, s2);
}
int word_number(const char *word)
{
const char **result = (const char **)bsearch(&word, strings, strings_len, sizeof *strings, string_cmp);
return result ? (int)(result - strings) : -1;
}
If you want a function that gives you a number for any string, mapping each string to a distinct number... It's even easier. First start with zero. For each byte in the string, multiply your number by 256 (the number of byte values) and add the next byte to the result, then return that result once you have done this operation with every char in the string. You will get a different number for each possible string (the map is injective, though not onto all the naturals, since a byte inside a string is never zero). But I think this is not what you want.
super_long_integer char2number(const unsigned char *s)
{
super_long_integer result = 0;
int c;
while ((c = *s++) != 0) {
result *= 256;
result += c;
}
return result;
}
But that integer must be capable of holding numbers up to 256^(maximum length of accepted string), which is a very large number.
EDIT: I've made the main change of using iterators to keep track of successive positions in the bit and character strings and pass the latter by const ref. Now, when I copy the sample inputs onto themselves multiple times to test the clock, everything finishes within 10 seconds for really long bit and character strings and even up to 50 lines of sample input. But, still when I submit, CodeEval says the process was aborted after 10 seconds. As I mention, they don't share their input so now that "extensions" of the sample input work, I'm not sure how to proceed. Any thoughts on an additional improvement to increase my recursive performance would be greatly appreciated.
NOTE: Memoization was a good suggestion but I could not figure out how to implement it in this case since I'm not sure how to store the bit-to-char correlation in a static look-up table. The only thing I thought of was to convert the bit values to their corresponding integer but that risks integer overflow for long bit strings and seems like it would take too long to compute. Further suggestions for memoization here would be greatly appreciated as well.
This is actually one of the moderate CodeEval challenges. They don't share the sample input or output for moderate challenges but the output "fail error" simply says "aborted after 10 seconds," so my code is getting hung up somewhere.
The assignment is simple enough. You take a filepath as the single command-line argument. Each line of the file will contain a sequence of 0s and 1s and a sequence of As and Bs, separated by a white space. You are to determine whether the binary sequence can be transformed into the letter sequence according to the following two rules:
1) Each 0 can be converted to any non-empty sequence of As (e.g, 'A', 'AA', 'AAA', etc.)
2) Each 1 can be converted to any non-empty sequence of As OR Bs (e.g., 'A', 'AA', etc., or 'B', 'BB', etc.) (but not a mixture of the letters)
The constraints are to process up to 50 lines from the file and that the length of the binary sequence is in [1,150] and that of the letter sequence is in [1,1000].
The most obvious starting algorithm is to do this recursively. What I came up with was: for each bit, collapse the entire next allowed group of characters first and test the shortened bit and character strings. If that fails, add back one character from the collapsed character group at a time and call again.
Here is my complete code. I removed cmd-line argument error checking for brevity.
#include <iostream>
#include <fstream>
#include <string>
#include <iterator>
using namespace std;
//typedefs
typedef string::const_iterator str_it;
//declarations
//use const ref and iterators to save time on copying and erasing
bool TransformLine(const string & bits, str_it bits_front, const string & chars, str_it chars_front);
int main(int argc, char* argv[])
{
//check there are at least two command line arguments: binary executable and file name
//ignore additional arguments
if(argc < 2)
{
cout << "Invalid command line argument. No input file name provided." << "\n"
<< "Goodbye...";
return -1;
}
//create input stream and open file
ifstream in;
in.open(argv[1], ios::in);
while(!in.is_open())
{
string name;
cout << "Invalid file name. Please enter file name: ";
cin >> name;
in.open(name.c_str(), ios::in);
}
//variables
string line_bits, line_chars;
//reserve space up to constraints to reduce resizing time later
line_bits.reserve(150);
line_chars.reserve(1000);
int line = 0;
//loop over lines (<=50 by constraint, ignore the rest)
while((in >> line_bits >> line_chars) && (line < 50))
{
line++;
//impose bit and char constraints
if(line_bits.length() > 150 ||
line_chars.length() > 1000)
continue; //skip this line
(TransformLine(line_bits, line_bits.begin(), line_chars, line_chars.begin()) == true) ? (cout << "Yes\n") : (cout << "No\n");
}
//close file
in.close();
return 0;
}
bool TransformLine(const string & bits, str_it bits_front, const string & chars, str_it chars_front)
{
//using iterators, so compute the current remaining lengths locally
//(bits_length is recomputed below after advancing the iterator)
int bits_length = distance(bits_front, bits.end());
int chars_length = distance(chars_front, chars.end());
//check success rule
if(bits_length == 0 && chars_length == 0)
return true;
//Check fail rules:
//1. bits length is zero (but chars length is not, by previous if)
//2. chars length is zero (but bits length is not, by previous if)
//3. next bit is 0 but next char is B
//(test the lengths first so we never dereference an end() iterator)
if(bits_length == 0 ||
chars_length == 0 ||
(*bits_front == '0' && *chars_front == 'B'))
return false;
//we now know that chars_length != 0 => chars_front != chars.end()
//kill a bit and then call recursively with each possible reduction of front char group
bits_length = distance(++bits_front, bits.end());
//current char group tracker
const char curr_char_type = *chars_front; //use const so compiler can optimize
int curr_pos = distance(chars.begin(), chars_front); //position of current front in char string
//since chars are 0-indexed, the following is also length of current char group
//start searching from curr_pos and length is relative to curr_pos so subtract it!!!
int curr_group_length = chars.find_first_not_of(curr_char_type, curr_pos)-curr_pos;
//make sure this isn't the last group!
if(curr_group_length < 0 || curr_group_length > chars_length)
curr_group_length = chars_length; //distance to end is precisely distance(chars_front, chars.end()) = chars_length
//kill the curr_char_group
//if curr_group_length = char_length then this will make chars_front = chars.end()
//and this will mean that chars_length will be 0 on the next recursive call.
chars_front += curr_group_length;
curr_pos = distance(chars.begin(), chars_front);
//call recursively, adding back a char from the current group until 1 less than starting point
int added_back = 0;
while(added_back < curr_group_length)
{
if(TransformLine(bits, bits_front, chars, chars_front))
return true;
//insert back one char from the current group
else
{
added_back++;
chars_front--; //represents adding back one character from the group
}
}
//if here then all recursive checks failed so initial must fail
return false;
}
They give the following test cases, which my code solves correctly:
Sample input:
1| 1010 AAAAABBBBAAAA
2| 00 AAAAAA
3| 01001110 AAAABAAABBBBBBAAAAAAA
4| 1100110 BBAABABBA
Correct output:
1| Yes
2| Yes
3| Yes
4| No
Since a transformation is possible if and only if copies of it are, I tried copying each binary and letter sequence onto itself various times and watching the clock. Even for very long bit and character strings and many lines, it finished in under 10 seconds.
My question is: since CodeEval is still saying it is running longer than 10 seconds but they don't share their input, does anyone have any further suggestions to improve the performance of this recursion? Or maybe a totally different approach?
Thank you in advance for your help!
Here's what I found:
Pass by constant reference
Strings and other large data structures should be passed by constant reference.
This allows the compiler to pass a pointer to the original object, rather than making a copy of the data structure.
Call functions once, save result
You are calling bits.length() twice. You should call it once and save the result in a constant variable. This allows you to check the status again without calling the function.
Function calls are expensive for time critical programs.
Use constant variables
If you are not going to modify a variable after assignment, use the const in the declaration:
const char curr_char_type = chars[0];
The const allows compilers to perform higher order optimization and provides safety checks.
Change data structures
Since you may be performing inserts in the middle of a string, you should use a different data structure for the characters. The std::string data type may need to reallocate after an insertion AND move the letters further down. Insertion is faster with a std::list<char> because a linked list only swaps pointers. There may be a trade-off because a linked list needs to dynamically allocate memory for each character.
Reserve space in your strings
When you create the destination strings, you should use a constructor that preallocates or reserves room for the largest size string. This will prevent the std::string from reallocating. Reallocations are expensive.
Don't erase
Do you really need to erase characters in the string?
By using starting and ending indices, you overwrite existing letters without having to erase the entire string.
Partial erasures are expensive. Complete erasures are not.
For more assistance, post to Code Review at StackExchange.
This is a classic recursion problem. However, a naive implementation of the recursion would lead to an exponential number of re-evaluations of previously computed function values. Using a simpler example for illustration, compare the runtime of the following two functions for a reasonably large N. Let's not worry about the int overflowing.
int RecursiveFib(int N)
{
if(N<=1)
return 1;
return RecursiveFib(N-1) + RecursiveFib(N-2);
}
int IterativeFib(int N)
{
if(N<=1)
return 1;
int a_0 = 1, a_1 = 1;
for(int i=2;i<=N;i++)
{
int temp = a_1;
a_1 += a_0;
a_0 = temp;
}
return a_1;
}
You would need to follow a similar approach here. There are two common ways of approaching the problem - dynamic programming and memoization. Memoization is the easier way of modifying your approach. Below is a memoized fibonacci implementation to illustrate how your implementation can be sped up.
int MemoFib(int N)
{
static vector<int> memo;
if((int)memo.size() <= N)
memo.resize(N + 1, -1); //grow on demand so later, larger N still work
if(N<=1)
return 1;
int& res = memo[N];
if(res!=-1)
return res;
return res = MemoFib(N-1) + MemoFib(N-2);
}
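Applied to the bits-to-letters problem itself, the natural memo key is the pair of positions (bit index, character index) rather than a converted integer, which sidesteps the overflow worry from the question. A sketch under that assumption (names made up, checked only against the posted samples):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Memoized check: "can bits[i..] be transformed into chars[j..]?"
// Memo values: -1 = unknown, 0 = no, 1 = yes.
bool canTransform(const std::string& bits, const std::string& chars,
                  std::size_t i, std::size_t j,
                  std::vector<std::vector<int>>& memo) {
    if (i == bits.size()) return j == chars.size();
    if (j == chars.size()) return false;
    int& m = memo[i][j];
    if (m != -1) return m != 0;
    // a '0' must become a run of As; a '1' becomes a run of whatever
    // letter comes next (all As or all Bs, never a mixture)
    if (bits[i] == '0' && chars[j] != 'A') return (m = 0) != 0;
    const char run = chars[j];
    // consume a non-empty run of `run`, trying every possible split point
    m = 0;
    for (std::size_t k = j; k < chars.size() && chars[k] == run; ++k)
        if (canTransform(bits, chars, i + 1, k + 1, memo)) { m = 1; break; }
    return m != 0;
}

bool canTransform(const std::string& bits, const std::string& chars) {
    std::vector<std::vector<int>> memo(
        bits.size(), std::vector<int>(chars.size(), -1));
    return canTransform(bits, chars, 0, 0, memo);
}
```

With at most 150 x 1000 states and each state scanning a single letter run, this stays far below a 10-second budget.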
Your failure message is "Aborted after 10 seconds" -- implying that the program was working fine as far as it went, but it took too long. This is understandable, given that your recursive program takes exponentially more time for longer input strings -- it works fine for the short (2-8 digit) strings, but will take a huge amount of time for 100+ digit strings (which the test allows for). To see how your running time goes up, you should construct yourself some longer test inputs and see how long they take to run. Try things like
0000000011111111 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBAAAAAAAA
00000000111111110000000011111111 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBAAAAAAAA
and longer. You need to be able to handle up to 150 digits and 1000 letters.
At CodeEval, you can submit a "solution" that just outputs what the input is, and do that to gather their test set. They may have variations so you may wish to submit it a few times to gather more samples. Some of them are too difficult to solve manually though... the ones you can solve manually will also run very quickly at CodeEval too, even with inefficient solutions, so there's that to consider.
Anyway, I did this same problem at CodeEval (using VB of all things), and my solution recursively looked for the "next index" of both A and B depending on what the "current" index is for where I was in a translation (after checking stoppage conditions first thing in the recursive method). I did not use memoization but that might've helped speed it up even more.
PS, I have not run your code, but it does seem curious that the recursive method contains a while loop within which the recursive method is called... since it's already recursive and should therefore encompass every scenario, is that while() loop necessary?
I am writing my own Huffman encoder, and so far I have created the Huffman tree by using a minHeap to pop off the two lowest frequency nodes and make a node that links to them and then pushing the new node back one (lather, rinse, repeat until only one node).
So now I have created the tree, but I need to use this tree to assign codes to each character. My problem is I don't know how to store the binary representation of a number in C++. I remember reading that unsigned char is the standard for a byte, but I am unsure.
I know I have to recursively traverse the tree, and whenever I hit a leaf node I must assign the corresponding character whatever code currently represents the path.
Here is what I have so far:
void traverseFullTree(huffmanNode* root, unsigned char curCode, unsigned char* codeBook){
if(root->leftChild == 0 && root->rightChild == 0){ //you are at a leaf node, assign curCode to root's character
codeBook[(int)root->character] = curCode;
}else{ //root has children, recurse into them with the currentCodes updated for right and left branch
traverseFullTree(root->leftChild, **CURRENT CODE SHIFTED WITH A 0**, codeBook );
traverseFullTree(root->rightChild, **CURRENT CODE SHIFTED WITH A 1**, codeBook);
}
}
CodeBook is my array that has a place for the codes of up to 256 characters (for each possible character in ASCII), but I am only going to actually assign codes to values that appear in the tree.
I am not sure if this is the correct way to traverse my Huffman tree, but this is what immediately seems to work (though I haven't tested it). Also, how do I call the traverse function on the root of the whole tree with no zeros OR ones (the very top of the tree)?
Should I be using a string instead and appending to the string either a zero or a 1?
Since computers are binary ... ALL numbers in C/C++ are already in binary format.
int a = 10;
The variable a is binary number.
What you want to look at is bit manipulation, operators such as & | << >>.
With the Huffman encoding, you would pack the data down into an array of bytes.
It's been a long time since I've written C, so this is an "off-the-cuff" pseudo-code...
Totally untested -- but should give you the right idea.
char buffer[1000]; // This is the buffer we are writing to -- calc the size out ahead of time or build it dynamically as you go with malloc/realloc.
void set_bit(int bit_position) {
int byte = bit_position / 8;
int bit = bit_position % 8;
// From http://stackoverflow.com/questions/47981/how-do-you-set-clear-and-toggle-a-single-bit-in-c
buffer[byte] |= 1 << bit;
}
void clear_bit(int bit_position) {
int byte = bit_position / 8;
int bit = bit_position % 8;
// From http://stackoverflow.com/questions/47981/how-do-you-set-clear-and-toggle-a-single-bit-in-c
buffer[byte] &= ~(1 << bit);
}
// and in your loop, you'd just call these functions to set or clear the bit number.
set_bit(0);
clear_bit(1);
Since curCode has only zero and one as its values, std::bitset might suit your need. It is convenient and memory-saving. Reference this: http://www.sgi.com/tech/stl/bitset.html
Only a little change to your code:
void traverseFullTree(huffmanNode* root, unsigned char curCode, std::bitset<N> &codeBook){ // N would be your table size, e.g. 256
if(root->leftChild == 0 && root->rightChild == 0){ //you are at a leaf node, assign curCode to root's character
codeBook[(int)root->character] = curCode;
}else{ //root has children, recurse into them with the currentCodes updated for right and left branch
traverseFullTree(root->leftChild, **CURRENT CODE SHIFTED WITH A 0**, codeBook );
traverseFullTree(root->rightChild, **CURRENT CODE SHIFTED WITH A 1**, codeBook);
}
}
how to store the binary representation of a number in C++
You can simply use bitsets
#include <iostream>
#include <bitset>
int main() {
int a = 42;
std::bitset<(sizeof(int) * 8)> bs(a);
std::cout << bs.to_string() << "\n";
std::cout << bs.to_ulong() << "\n";
return (0);
}
as you can see they also provide methods for conversions to other types, and the handy [] operator.
Please don't use a string.
You can represent the codebook as two arrays of integers, one with the bit-lengths of the codes, one with the codes themselves. There is one issue with that: what if a code is longer than an integer? The solution is to just not make that happen. Having a short-ish maximum codelength (say 15) is a trick used in most practical uses of Huffman coding, for various reasons.
I recommend using canonical Huffman codes, and that slightly simplifies your tree traversal: you'd only need the lengths, so you don't have to keep track of the current code. With canonical Huffman codes, you can generate the codes easily from the lengths.
If you are using canonical codes, you can let the codes be wider than integers, because the high bits would be zero anyway. However, it is still a good idea to limit the lengths. Having a short maximum length (well not too short, that would limit compression, but say around 16) enables you to use the simplest table-based decoding method, a simple single-level table.
Limiting code lengths to 25 or less also slightly simplifies encoding, it lets you use a 32bit integer as a "buffer" and empty it byte by byte, without any special handling of the case where the buffer holds fewer than 8 bits but encoding the current symbol would overflow it (because that case is entirely avoided - in the worst case there would be 7 bits in the buffer and you try to encode a 25-bit symbol, which works just fine).
Something like this (not tested in any way)
uint32_t buffer = 0;
int bufbits = 0;
for (int i = 0; i < symbolCount; i++)
{
int s = symbols[i];
buffer <<= lengths[s]; // make room for the bits
bufbits += lengths[s]; // buffer got longer
buffer |= values[s]; // put in the bits corresponding to the symbol
while (bufbits >= 8) // as long as there is at least a byte in the buffer
{
bufbits -= 8; // forget it's there
writeByte((buffer >> bufbits) & 0xFF); // and save it
}
}
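To complement the canonical-code suggestion above, here is a sketch of deriving the code values from the lengths alone, following the usual DEFLATE-style construction (names made up):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Derive canonical Huffman codes from code lengths alone (lengths[] is
// indexed by symbol; 0 means the symbol is unused).
std::vector<uint32_t> canonicalCodes(const std::vector<int>& lengths) {
    if (lengths.empty()) return {};
    int maxLen = *std::max_element(lengths.begin(), lengths.end());
    std::vector<int> countPerLen(maxLen + 1, 0);
    for (int len : lengths)
        if (len > 0) countPerLen[len]++;
    // first code of each length: skip past the codes handed out at the
    // previous (shorter) length, then shift in a zero
    std::vector<uint32_t> nextCode(maxLen + 1, 0);
    uint32_t code = 0;
    for (int len = 1; len <= maxLen; ++len) {
        code = (code + countPerLen[len - 1]) << 1;
        nextCode[len] = code;
    }
    // assign codes in symbol order; equal lengths get consecutive values
    std::vector<uint32_t> codes(lengths.size(), 0);
    for (std::size_t s = 0; s < lengths.size(); ++s)
        if (lengths[s] > 0) codes[s] = nextCode[lengths[s]]++;
    return codes;
}
```

Symbols are implicitly ordered first by code length, then by symbol value, which is exactly the property that makes simple table-based decoding work.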
I've done a small exercise about hash tables in the past, but the user was giving the array size, and the struct was like this (so the user was giving a number and a word as input each time):
struct data
{
int key;
char c[20];
};
So it was quite simple, since I knew the array size and the user also said how many items he would give as input. The way I did it was:
Hash the keys the user gave me
Find the position array[hashed(key)] in the array
If it was empty, I would put the data there
If it wasn't, I would put it in the next free position I found.
But now I have to make an inverted index, and I am researching so I can make a hash table for it. The words will be collected from around 30 txts and there will be very many of them.
So in this case, how long should the array be? How can I hash words? Should I use hashing with open addressing or with chaining? The exercise says that we could use a hash table as-is if we find one online, but I prefer to understand it and create it on my own. Any clues will help me :)
In this exercise (inverted index using a hash table) the structs look like this. data is the type of the hash table I will create.
struct posting
{
string word;
posting *next;
};
struct data
{
string word;
posting *ptrpostings;
data *next;
};
Hashing can be done any way you choose. Suppose that the string is ABC. You can employ hashing as A=1, B=2, C=3, Hash = (1+2+3)/(length = 3) = 2. But this is very primitive.
The size of the array will depend on the hash algorithm that you deploy, but it is better to choose an algorithm that returns a definite-length hash for every string. For example, if you chose to go with SHA1, you can safely allocate 40 bytes per hash. Refer to "Storing SHA1 hash values in MySQL" and read up on the algorithm: http://en.wikipedia.org/wiki/SHA-1. I believe that it can be safely used.
On the other hand, if it just for a simple exercise, you can also use MD5 hash. I wouldn't recommend using it in practical purposes as its rainbow tables are available easily :)
---------EDIT-------
You can try to implement like this ::
#include <iostream>
#include <string>
#include <stdlib.h>
#include <stdio.h>
#define MAX_LEN 30
using namespace std;
typedef struct
{
string name; // for the filename
// ... change this to your specification
} hashd;
hashd hashArray[MAX_LEN]; // tentative
int returnHash(string s)
{
// A simple hashing, no collision handled
int sum=0,index=0;
for(string::size_type i=0; i < s.length(); i++)
{
sum += s[i];
}
index = sum % MAX_LEN;
return index;
}
int main()
{
string fileName;
int index;
cout << "Enter filename ::\t" ;
cin >> fileName;
cout << "Entered filename is ::\t" << fileName << "\n";
index = returnHash(fileName);
cout << "Generated index is ::\t" << index << "\n";
hashArray[index].name = fileName;
cout << "Filename in array ::\t" <<hashArray[index].name ;
return 0;
}
Then, to achieve O(1), anytime you want to fetch the filename's contents, just run the returnHash(filename) function. It will directly return the index of the array :)
A hash table can be implemented as a simple 2-dimensional array. The question is how to compute the unique key for each item to be stored. Some things have keys built into the data, and for other things you'll have to compute one: MD5 as suggested above is probably just fine for your needs.
The next problem you need to solve is how to lay out, or size, your hash table. That's something that you'll ultimately need to tune to your own needs through some testing. You might start by setting up the 1st dimension of your array with 256 entries -- one for each combination of the first 2 hex digits of the MD5 hash. Whenever you have a collision, you add another entry along the 2nd dimension of your array at that 1st dimension index. This means that you'll statically define a 1-dimensional array while dynamically allocating the 2nd dimension entries as needed. Hopefully that makes as much sense to you as it does to me.
When doing lookups, you can immediately find the right 1st dimension index using the first 2 digits of the MD5 hash. Then a relatively short linear search along the 2nd dimension will quickly bring you to the item you seek.
You might find from experimentation that it's more efficient to use a larger 1st dimension (use the first 3 digits of the MD5 hash) if your data set is sufficiently large. Depending on the size of texts involved and the distribution of their use of the lexicon, your results will probably dictate some of your architecture.
On the other hand, you might just start small and build in some intelligence to automatically resize and layout your table. If your table gets too long in either direction, performance will suffer.
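A sketch of that layout (a simple polynomial hash stands in for "the first two hex digits of the MD5"; all names are made up):

```cpp
#include <array>
#include <cassert>
#include <string>
#include <vector>

// Bucketed layout: a fixed first dimension (one slot per possible leading
// hash byte) with a dynamically grown second dimension for collisions.
struct Entry {
    std::string word;
    // the posting list for the inverted index would go here
};

unsigned bucketOf(const std::string& w) {
    unsigned h = 0;
    for (unsigned char c : w) h = h * 31 + c;  // simple polynomial hash
    return h & 0xFF;                           // 256 first-dimension slots
}

std::array<std::vector<Entry>, 256> table;

void insertWord(const std::string& w) {
    table[bucketOf(w)].push_back(Entry{w});
}

bool findWord(const std::string& w) {
    for (const Entry& e : table[bucketOf(w)])  // short linear scan
        if (e.word == w) return true;
    return false;
}
```

The first dimension is statically sized while each std::vector grows on demand, mirroring the static/dynamic split described above.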
I have built the Huffman tree, but I have no idea how to store the codes as bits, because I don't know how to handle the variable lengths.
I want to create a table that stores the Huffman code in bits, for printing the encoded result.
I cannot use STL containers like bitset.
I have tried this:
void traverse( string code = "")const
{
if( frequency == 0 ) return;
if ( left ) {
left->traverse( code + '0' );
right->traverse( code + '1' );
}
else {//leaf node
huffmanTable[ch] = code;
}
}
Can you give me some algorithm to handle it?
I want to store each '0' using 1 bit and each '1' using 1 bit.
Thanks in advance.
You'll need a buffer, a variable to track the size of the buffer in bytes, and a variable to track the number of valid bits in the buffer.
To store a bit:
1) Check if adding a bit will increase the number of bytes stored. If not, skip to step 4.
2) Is there room in the buffer to store an additional byte? If so, skip to step 4.
3) Reallocate a storage buffer a few bytes larger. Copy the existing data. Increase the variable holding the size of the buffer.
4) Compute the byte position and bit position at which the next bit will be stored. Set or clear that bit as appropriate.
5) Increment the variable holding the number of bits stored.
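The steps above might be sketched like this (the growth policy is arbitrary; a std::vector stands in for the manual reallocation in step 3):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A sketch of the growable bit buffer described in the steps above.
struct BitBuffer {
    std::vector<unsigned char> bytes;  // storage; the vector's growth
                                       // handles step 3's reallocation
    std::size_t bit_count = 0;         // number of valid bits stored

    void append_bit(int bit) {
        std::size_t byte_index = bit_count / 8;   // step 4: locate the byte
        if (byte_index >= bytes.size())           // steps 1-3: grow if needed
            bytes.push_back(0);
        if (bit)                                  // step 4: set/clear the bit
            bytes[byte_index] |= 1u << (bit_count % 8);
        bit_count++;                              // step 5
    }
};
```

Appending the bits 1, 0, 1 leaves a single byte holding the pattern 101 in its low bits, with bit_count recording that only three of the eight bits are valid.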
You can use a fixed size structure to store the table and just bits to store encoded input:
struct TableEntry {
uint8_t size;
uint8_t code;
};
TableEntry huffmanTable[256];
void traverse(uint8_t size, uint8_t code) const {
if( frequency == 0 ) return;
if ( left ) {
left->traverse(size+1, code << 1 );
right->traverse(size+1, (code << 1) | 1 );
}
else {//leaf node
huffmanTable[ch].code = code;
huffmanTable[ch].size = size;
}
}
For encoding, you can use the algorithm posted by David.
Basically I'd use one of two different approaches here, based on the maximum key length/depth of the tree:
If you've got a fixed length and it's shorter than your available integer data types (like long int), you can use the approach shown by perreal.
If you don't know the maximum depth and think you might be running out of space, I'd use std::vector<bool> as the code value. This is a special implementation of the vector using a single bit per value (essentially David's approach).
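A sketch of the std::vector<bool> approach, assuming a minimal node type (names made up):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Minimal stand-in for a Huffman tree node.
struct Node {
    Node *left = nullptr, *right = nullptr;
    char ch = 0;
};

// Walk the tree, extending the code with 0 for left and 1 for right;
// record the accumulated code at each leaf.
void traverse(const Node* n, std::vector<bool> code,
              std::map<char, std::vector<bool>>& book) {
    if (!n->left && !n->right) { book[n->ch] = code; return; }
    code.push_back(false);  // 0 for the left branch
    traverse(n->left, code, book);
    code.back() = true;     // 1 for the right branch
    traverse(n->right, code, book);
}
```

Passing code by value means each branch extends its own copy, so there is no undo step; bits simply accumulate down each path, and the root is called with an empty code.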
Here is the problem: Remove specified characters from a given string.
Input: The string is "Hello World!" and characters to be deleted are "lor"
Output: "He Wd!"
Solving this involves two sub-parts:
Determining if the given character is to be deleted
If so, then deleting the character
To solve the first part, I am reading the characters to be deleted into a std::unordered_map, i.e. I parse the string "lor" and insert each character into the hashmap. Later, when I am parsing the main string, I will look into this hashmap with each character as the key and if the returned value is non-zero, then I delete the character from the string.
Question 1: Is this the best approach?
Question 2: Which would be better for this problem? std::map or std::unordered_map? Since I am not interested in ordering, I used an unordered_map. But is there a higher overhead for creating the hash table? What to do in such situations? Use a map (balanced tree) or a unordered_map (hash table)?
Now coming to the next part, i.e. deleting the characters from the string. One approach is to delete the character and shift the data from that point on, back by one position. In the worst case, where we have to delete all the characters, this would take O(n^2).
The second approach would be to copy only the required characters to another buffer. This would involve allocating enough memory to hold the original string and copy over character by character leaving out the ones that are to be deleted. Although this requires additional memory, this would be a O(n) operation.
The third approach would be to start reading and writing from the 0th position, incrementing the source pointer every time I read and the destination pointer only when I write. Since the source pointer will always be at or ahead of the destination pointer, I can write over the same buffer. This saves memory and is also an O(n) operation. I do exactly that, calling resize at the end to remove the leftover characters.
Here is the function I have written:
// str contains the string (Hello World!)
// chars contains the characters to be deleted (lor)
void remove_chars(string& str, const string& chars)
{
unordered_map<char, int> chars_map;
for(string::size_type i = 0; i < chars.size(); ++i)
chars_map[chars[i]] = 1;
string::size_type i = 0; // source
string::size_type j = 0; // destination
while(i < str.size())
{
if(chars_map[str[i]] != 0)
++i;
else
{
str[j] = str[i];
++i;
++j;
}
}
str.resize(j);
}
Question 3: What are the different ways by which I can improve this function. Or is this best we can do?
Thanks!
Good job, now learn about the standard library algorithms and boost:
str.erase(std::remove_if(str.begin(), str.end(), boost::is_any_of("lor")), str.end());
Assuming that you're studying algorithms, and not interested in library solutions:
Hash tables are most valuable when the number of possible keys is large, but you only need to store a few of them. Your hash table would make sense if you were deleting specific 32-bit integers from digit sequences. But with ASCII characters, it's overkill.
Just make an array of 256 bools and set a flag for the characters you want to delete. It only uses one table lookup instruction per input character. Hash map involves at least a few more instructions to compute the hash function. Space-wise, they are probably no more compact once you add up all the auxiliary data.
void remove_chars(string& str, const string& chars)
{
// set up the look-up table
std::vector<bool> discard(256, false);
for (std::string::size_type i = 0; i < chars.size(); ++i)
{
discard[(unsigned char)chars[i]] = true;
}
for (std::string::size_type j = 0; j < str.size(); ++j)
{
if (discard[(unsigned char)str[j]])
{
// do something, depending on your storage choice
}
}
}
Regarding your storage choices: Choose between options 2 and 3 depending on whether you need to preserve the input data or not. 3 is obviously most efficient, but you don't always want an in-place procedure.
Here is a KISS solution with many advantages:
void remove_chars (char *dest, const char *src, const char *excludes)
{
do {
if (!strchr (excludes, *src))
*dest++ = *src;
} while (*src++);
*dest = '\000';
}
You can ping pong between strcspn and strspn to avoid the need for a hash table:
void remove_chars(
const char *input,
char *output,
const char *characters)
{
const char *next_input= input;
char *next_output= output;
while (*next_input!='\0')
{
int copy_length= strcspn(next_input, characters); /* span of chars to KEEP */
memcpy(next_output, next_input, copy_length);
next_output+= copy_length;
next_input+= copy_length;
next_input+= strspn(next_input, characters); /* skip the chars to delete */
}
*next_output= '\0';
}