How do I write a function in C++ that takes a string s and an integer n as input and gives as output a string that has a space inserted every n characters of s?
For example, if the input is s = "abcdefgh" and n = 3 then the output should be "abc def gh"
EDIT:
I could have used loops for this, but I am looking for a concise and idiomatic C++ solution (i.e. one that uses algorithms from the STL).
EDIT:
Here's how I would do it in Scala (which happens to be my primary language):
def drofotize(s: String, n: Int) = s.grouped(n).toSeq.flatMap(_ + " ").mkString
Is this level of conciseness possible with C++? Or do I have to use explicit loops after all?
Copy each character in a loop, and when i>0 && i%(n+1)==0 add an extra space to the destination string.
As for the Standard Library, you could write your own std::back_inserter-like iterator which adds the extra spaces, and then use it as follows:
std::copy( str1.begin(), str1.end(), my_back_inserter(str2, n) );
but I would say that writing such a functor is just a waste of your time. It is much simpler to write a copy_with_spaces function with a good old for-loop in it.
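For reference, a minimal sketch of that plain-loop version (copy_with_spaces is just the name suggested above, not a standard function):

#include <string>

std::string copy_with_spaces(const std::string& s, std::size_t n)
{
    if (n == 0)
        return s;

    std::string out;
    out.reserve(s.size() + s.size() / n);
    for (std::size_t i = 0; i < s.size(); ++i)
    {
        if (i > 0 && i % n == 0)
            out += ' ';   // a space before every n-th input character
        out += s[i];
    }
    return out;
}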
STL algorithms don't really provide anything like this. Best I can think of:
#include <string>
using namespace std;

string drofotize(const string &s, size_t n)
{
    if (s.size() <= n)
    {
        return s;
    }
    return s.substr(0, n) + " " + drofotize(s.substr(n), n);
}
Say I have a vector of values from a tokenizing function, tokenize(). I know it will only have two values. I want to store the first value in a and the second in b. In Python, I would do:
a, b = string.split(' ')
I could do it in an ugly way, like this:
vector<string> tokens = tokenize(string);
string a = tokens[0];
string b = tokens[1];
But that requires two extra lines of code, an extra variable, and hurts readability.
How would I do such a thing in C++ in a clean and efficient way?
EDIT: I must emphasize that efficiency is very important. Too many answers don't satisfy this. This includes modifying my tokenization function.
EDIT 2: I am using C++11 for reasons outside of my control and I also cannot use Boost.
With structured bindings (which will definitely be in C++17), you'd be able to write something like:
auto [a,b] = as_tuple<2>(tokenize(str));
where as_tuple<N> is some to-be-declared function that converts a vector<string> to a tuple<string, string, ... N times ...>, probably throwing if the sizes don't match. You can't destructure a std::vector directly since its size isn't known at compile time. This will necessarily do extra moves of the strings, so you're losing some efficiency in order to gain some code clarity. Maybe that's OK.
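For illustration, here is one shape such an as_tuple helper could take; the name and interface are just assumptions based on the description above, and it needs C++14 or later for make_index_sequence and return type deduction:

#include <cstddef>
#include <stdexcept>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

template <std::size_t... Is>
auto as_tuple_impl(std::vector<std::string>&& v, std::index_sequence<Is...>)
{
    // the "extra moves" mentioned above happen here
    return std::make_tuple(std::move(v[Is])...);
}

template <std::size_t N>
auto as_tuple(std::vector<std::string>&& v)
{
    if (v.size() != N)
        throw std::length_error("unexpected number of tokens");
    return as_tuple_impl(std::move(v), std::make_index_sequence<N>{});
}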
Or maybe you write a tokenize<N> that returns a tuple<string, string, ... N times ...> directly, avoiding the extra move. In that case:
auto [a, b] = tokenize<2>(str);
is great.
Before C++17, what you have is what you can do. But just make your variables references:
std::vector<std::string> tokens = tokenize(str);
std::string& a = tokens[0];
std::string& b = tokens[1];
Yeah, it's a couple extra lines of code. That's not the end of the world. It's easy to understand.
If you "know it will only have two values", you could write something like:
#include <cassert>
#include <iostream>
#include <string>
#include <tuple>
std::pair<std::string, std::string> tokenize(const std::string &text)
{
    const auto pos(text.find(' '));
    assert(pos != std::string::npos);
    return {text.substr(0, pos), text.substr(pos + 1)};
}
int main()
{
    std::string a, b;
    std::tie(a, b) = tokenize("first second");
    std::cout << a << " " << b << '\n';
}
Unfortunately, without structured bindings (C++17) you have to use the std::tie hack, and the variables a and b have to exist beforehand.
Ideally you'd rewrite the tokenize() function so that it returns a pair of strings rather than a vector:
std::pair<std::string, std::string> tokenize(const std::string& str);
Or you would pass two references to empty strings to the function as parameters.
void tokenize(const std::string& str, std::string& result_1, std::string& result_2);
If you have no control over the tokenize function, the best you can do is move the strings out of the vector to avoid copying them:
std::vector<std::string> tokens = tokenize(str);
std::string a = std::move(tokens.front());
std::string b = std::move(tokens.back());
I'm trying to write a short and stupid equation parser, and need to split a string around a given operator. I can split off the right side of a string by doing
return std::string(oprtr + 1, equ.end());
where equ is the string and oprtr is an iterator at the position I need to split from. This works perfectly, but splitting off the left side doesn't:
return std::string(equ.begin(), oprtr - 1);
====
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::_S_create
I've tried a variety of other nasty workarounds that I'm really not proud of, like
return equ.substr(0, std::distance(equ.begin(), oprtr));
This one doesn't give errors, but actually just returns the entire equation. What am I doing wrong here?
Works for me with g++ 4.8.2:
#include <string>
#include <algorithm>
#include <iostream>
int main() {
    std::string eq("a+b=c");
    std::string::iterator opit = std::find(eq.begin(), eq.end(), '=');
    std::string lhs = std::string(eq.begin(), opit);
    std::cout << "lhs: " << lhs << "\n";
    return 0;
}
The output is:
lhs: a+b
It seems you are doing something like this:
void my_func(string equ, string::iterator oprtr)
{
    string left = std::string(equ.begin(), oprtr);
}

// ...
string::iterator oprtr = equ.begin() + equ.find('=');
my_func(equ, oprtr);
That won't work, because inside my_func you have iterators into two different strings: the original string is copied when you call my_func, so oprtr still points into the caller's string while equ.begin() points into the copy.
One fix is to pass by reference
void my_func(string& equ, string::iterator oprtr)
Another fix is to use indexes instead of iterators. Indexes aren't tied to one particular string instance the way iterators are.
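A minimal sketch of that index-based fix (my_func and the '=' handling mirror the hypothetical snippet above):

#include <string>

void my_func(const std::string& equ, std::string::size_type pos)
{
    std::string left  = equ.substr(0, pos);   // everything before '='
    std::string right = equ.substr(pos + 1);  // everything after '='
    // ...
}

// call site:
// std::string::size_type pos = equ.find('=');
// if (pos != std::string::npos)
//     my_func(equ, pos);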
I wrote a CGI script for my website which reads through blocks of text and matches all occurrences of English words. I've been making some fundamental changes to the site's code recently which have necessitated rewriting most of it in C++. As I'd hoped, almost everything has become much faster in C++ than perl, with the exception of this function.
I know that regexes are a relatively recent addition to C++ and not necessarily its strongest suit. It may simply be the case that it is slower than perl in this instance. But I wanted to share my code in the hopes that someone might be able to find a way of speeding up what I am doing in C++.
Here is the perl code:
open(WORD, "</file/path/wordthree.txt") || die "opening";
while (<WORD>) {
    chomp;
    push @wordlist, $_;
}
close(WORD) || die "closing";

foreach (@wordlist) {
    while ($bloc =~ m/$_/g) {
        $location = pos($bloc) - length($_);
        $match = $location.";".pos($bloc).";".$_;
        push(@hits, $match);
    }
}
wordthree.txt is a list of ~270,000 English words separated by new lines, and $bloc is 3200 characters of text. Perl performs these searches in about one second. You can see it in play here if you like: http://libraryofbabel.info/anglishize.cgi?05y-w1-s3-v20:1
With C++ I have tried the following:
typedef std::map<std::string::difference_type, std::string> hitmap;
hitmap hits;

void regres(const boost::match_results<std::string::const_iterator>& what) {
    hits[what.position()] = what[0].str();
}

words.open("/file/path/wordthree.txt");
std::string wordlist[274784];
unsigned i = 0;
while (words >> wordlist[i]) { i++; }
words.close();

for (unsigned i = 0; i < 274783; i++) {
    boost::regex word(wordlist[i]);
    boost::sregex_iterator lex(book.begin(), book.end(), word);
    boost::sregex_iterator end;
    std::for_each(lex, end, &regres);
}
The C++ version takes about 12 seconds to read the same amount of text the same number of times. Any advice on how to make it competitive with the perl script is greatly appreciated.
Firstly I'd cut down on the number of allocations:
use string_ref instead of std::string where possible
use mapped files instead of reading it all in memory ahead of time
use const char* instead of std::string::const_iterator to navigate the book
Here is a sample that uses Boost Spirit Qi to parse the wordlist (I don't have yours, so I assume line-separated words).
std::vector<sref> wordlist;
io::mapped_file_source mapped("/etc/dictionaries-common/words");
qi::parse(mapped.begin(), mapped.end(), qi::raw[+(qi::char_ - qi::eol)] % qi::eol, wordlist);
In full Live On Coliru¹
#include <boost/regex.hpp>
#include <boost/utility/string_ref.hpp>
#include <boost/spirit/include/qi.hpp>
#include <boost/iostreams/device/mapped_file.hpp>
#include <iostream>
#include <map>
#include <string>
#include <vector>

namespace qi = boost::spirit::qi;
namespace io = boost::iostreams;

using sref = boost::string_ref;
using regex = boost::regex;
namespace boost { namespace spirit { namespace traits {
    template <typename It>
    struct assign_to_attribute_from_iterators<sref, It, void> {
        static void call(It f, It l, sref& attr) { attr = { f, size_t(std::distance(f, l)) }; }
    };
} } }

typedef std::map<std::string::difference_type, sref> hitmap;
hitmap hits;

void regres(const boost::match_results<const char*>& what) {
    hits[what.position()] = sref(what[0].first, what[0].length());
}

int main() {
    std::vector<sref> wordlist;
    io::mapped_file_source mapped("/etc/dictionaries-common/words");
    qi::parse(mapped.begin(), mapped.end(), qi::raw[+(qi::char_ - qi::eol)] % qi::eol, wordlist);

    std::cout << "Wordlist contains " << wordlist.size() << " entries\n";

    io::mapped_file_source book("/etc/dictionaries-common/words");

    for (auto const& s : wordlist) {
        regex word(s.to_string());
        boost::cregex_iterator lex(book.begin(), book.end(), word), end;
        std::for_each(lex, end, &regres);
    }
}
Next step
This still creates a regex for each word. I suspect it will be a lot more efficient if you combine them all into a single pattern. You'll spend more memory/CPU creating the regex, but the matching loop no longer runs once per word-list entry.
Because the regex library might not have been designed for this scale, you could have better results with a custom search and a trie implementation.
Here's a simple attempt (that is indeed much faster for my /etc/dictionaries-common/words file of 99171 lines):
Live On Coliru
#include <boost/regex.hpp>
#include <boost/utility/string_ref.hpp>
#include <boost/iostreams/device/mapped_file.hpp>
#include <algorithm>
#include <map>
#include <string>

namespace io = boost::iostreams;

using sref = boost::string_ref;
using regex = boost::regex;
typedef std::map<std::string::difference_type, sref> hitmap;
hitmap hits;

void regres(const boost::match_results<const char*>& what) {
    hits[what.position()] = sref(what[0].first, what[0].length());
}

int main() {
    io::mapped_file_params params("/etc/dictionaries-common/words");
    params.flags = io::mapped_file::mapmode::priv;
    io::mapped_file mapped(params);

    std::replace(mapped.data(), mapped.end(), '\n', '|');
    regex const wordlist(mapped.begin(), mapped.end() - 1);

    io::mapped_file_source book("/etc/dictionaries-common/words");
    boost::cregex_iterator lex(book.begin(), book.end(), wordlist), end;
    std::for_each(lex, end, &regres);
}
¹ of course coliru doesn't have a suitable wordlist
It looks to me like Perl is smart enough to figure out that you're abusing regular expressions to do an ordinary linear search, a straightforward lookup. You are looking up straight text, and none of your search patterns appear to be, well, a pattern. Based on your description, all your search patterns look like ordinary strings, so Perl is likely optimizing it down to a linear string search.
I am not familiar with Boost's internal implementation of regular expression matching, but it's likely that it's compiling each search string into a state machine, and then executing the state machine for each search. That's the usual approach used with generic regular expression implementations. And that's a lot of work. A lot of completely needless work, in this specific case.
What you should do is as follows:
1) You are reading wordthree.txt into an array of strings. Instead, read it into a std::set<std::string>.
2) You are reading the entire text to search into a single book container. It's not clear, based on your code, whether book is a single std::string, or a std::vector<char>. But whatever the case, don't do that. Read the text to search iteratively, one word at a time. For each word, look it up in the std::set, and go from there.
This is, after all, what you're trying to do directly, and you should do that instead of taking a grand detour through the wonders of regular expressions, which accomplishes very little other than wasting a lot of time.
If you implement this correctly, you'll likely see C++ being just as fast, if not faster, than Perl.
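A rough sketch of that approach (the paths are placeholders, the word list is assumed to be whitespace- or newline-separated, and the position bookkeeping and punctuation handling from the original script are left out):

#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <vector>

int main()
{
    // read the word list into a set for fast lookups
    std::set<std::string> wordlist;
    std::ifstream words("/file/path/wordthree.txt");
    for (std::string w; words >> w; )
        wordlist.insert(w);

    // read the text to search one whitespace-delimited token at a time
    std::ifstream book("/file/path/bloc.txt");   // placeholder path
    std::vector<std::string> hits;
    for (std::string token; book >> token; )
        if (wordlist.count(token))
            hits.push_back(token);

    std::cout << "found " << hits.size() << " matches\n";
}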
I could also think of several other, more aggressively optimized approaches that also leverage std::set, with custom classes and comparators that avoid the heap allocations inherent in using a bunch of std::strings, but it probably won't be necessary. A basic approach using a std::set-based lookup should be fast enough.
Given the following string, "Hi ~+ and ^*. Is ^* still flying around ~+?"
I want to replace all occurrences of "~+" and "^*" with "Bobby" and "Danny", so the string becomes:
"Hi Bobby and Danny. Is Danny still flying around Bobby?"
I would prefer not to have to call the Boost replace function twice to replace the occurrences of the two different values.
I managed to implement the required replacement function using Boost.Iostreams. Specifically, the method I used was a filtering stream that uses regular expressions to match what to replace. I am not sure about the performance on gigabyte-sized files; you will need to test it, of course. Anyway, here's the code:
#include <boost/regex.hpp>
#include <boost/iostreams/filter/regex.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <iostream>
int main()
{
    using namespace boost::iostreams;

    regex_filter filter1(boost::regex("~\\+"), "Bobby");
    regex_filter filter2(boost::regex("\\^\\*"), "Danny");

    filtering_ostream out;
    out.push(filter1);
    out.push(filter2);
    out.push(std::cout);

    out << "Hi ~+ and ^*. Is ^* still flying around ~+?" << std::endl;
    // for file conversion, use this line instead:
    //out << std::cin.rdbuf();
}
The above prints "Hi Bobby and Danny. Is Danny still flying around Bobby?" when run, just like expected.
It would be interesting to see the performance results, if you decide to measure it.
Daniel
Edit: I just realized that regex_filter needs to read the entire character sequence into memory, making it pretty useless for gigabyte-sized inputs. Oh well...
I did notice that it's been a year since this was active, but for what it's worth: I came across an article on CodeProject today that claims to solve this problem; maybe you can use ideas from there.
I can't vouch for its correctness, but might be worth taking a look at. :)
The implementation surely requires holding the entire string in memory, but you can easily work around that (as with any other implementation that performs the replacements) as long as you can split the input into blocks and guarantee that you never split at a position that is inside a symbol to be replaced. (One easy way to do that in your case is to split at a position where the next char isn't any of the chars used in a symbol.)
There is a reason beyond performance (though that is a sufficient reason in my book) to add a "ReplaceMultiple" method to one's string library: Simply doing the replace operation N times is NOT correct in general.
If the values that are substituted for the symbols are not constrained, values can end up being treated as symbols in subsequent replace operations. (There could be situations where you'd actually want this, but there are definitely cases where you don't. Using strange-looking symbols reduces the severity of the problem, but doesn't solve it, and it is ugly, because the strings to be formatted may be user-definable and so should not require exotic characters.)
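To make that concrete, here is a small illustration of the pitfall with two sequential replace_all calls (the symbols and values are made up for the example):

#include <boost/algorithm/string/replace.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string s = "Hi ~+!";
    boost::algorithm::replace_all(s, "~+", "^*fan");  // the substituted value happens to contain "^*"
    boost::algorithm::replace_all(s, "^*", "Danny");
    std::cout << s << '\n';   // prints "Hi Dannyfan!", not the intended "Hi ^*fan!"
}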
However, I suspect there is a good reason why I can't easily find a general multi-replace implementation. A "ReplaceMultiple" operation simply isn't (obviously) well-defined in general.
To see this, consider what it might mean to "replace 'aa' with '!' and 'baa' with '?' in the string 'abaa'"? Is the result 'ab!' or 'a?' - or is such a replacement illegal?
One could require the symbols to be "prefix-free", but in many cases that'd be unacceptable. Say I want to use this to format some template text, and say my template is for code. I want to replace "§table" with a database table name known only at runtime. It'd be annoying if I now couldn't use "§t" in the same template. The templated script could be something completely generic, and, lo and behold, one day I encounter a client that actually made use of "§" in his table names... potentially making my template library rather less useful.
A perhaps better solution would be to use a recursive-descent parser instead of simply replacing literals. :)
A very late answer, but none of the answers so far gives a solution.
With a bit of Boost Spirit Qi you can do this substitution in one pass, with extremely high efficiency.
#include <iostream>
#include <string>
#include <string_view>
#include <map>
#include <stdexcept>
#include <boost/spirit/include/qi.hpp>
#include <boost/fusion/adapted.hpp>
namespace bsq = boost::spirit::qi;
using SUBSTITUTION_MAP = std::map<std::string, std::string>;

template <typename InputIterator>
struct replace_grammar
    : bsq::grammar<InputIterator, std::string()>
{
    replace_grammar(const SUBSTITUTION_MAP& substitution_items)
        : replace_grammar::base_type(main_rule)
    {
        for (const auto& [key, value] : substitution_items) {
            replace_items.add(key, value);
        }

        main_rule = *( replace_items [( [](const auto& val, auto& context) {
                           auto& res = boost::fusion::at_c<0>(context.attributes);
                           res += val; })]
                     |
                       bsq::char_ [( [](const auto& val, auto& context) {
                           auto& res = boost::fusion::at_c<0>(context.attributes);
                           res += val; })] );
    }

private:
    bsq::symbols<char, std::string> replace_items;
    bsq::rule<InputIterator, std::string()> main_rule;
};

std::string replace_items(std::string_view input, const SUBSTITUTION_MAP& substitution_items)
{
    std::string result;
    result.reserve(input.size());

    using iterator_type = std::string_view::const_iterator;
    const replace_grammar<iterator_type> p(substitution_items);

    if (!bsq::parse(input.begin(), input.end(), p, result))
        throw std::logic_error("should not happen");

    return result;
}

int main()
{
    std::cout << replace_items("Hi ~+ and ^*. Is ^* still flying around ~+?",
                               {{"~+", "Bobby"}, {"^*", "Danny"}});
}
The qi::symbols parser is essentially doing the job you ask for, i.e. searching for the given keys and replacing them with the given values.
https://www.boost.org/doc/libs/1_79_0/libs/spirit/doc/html/spirit/qi/reference/string/symbols.html
As said in the docs, it builds a Ternary Search Tree behind the scenes, which means it is more efficient than searching the string n times, once for each key.
Boost string_algo does have a replace_all function. You could use that.
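For example, a quick sketch with boost::algorithm::replace_all (note that this makes two separate passes, which the question was hoping to avoid):

#include <boost/algorithm/string/replace.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string s = "Hi ~+ and ^*. Is ^* still flying around ~+?";
    boost::algorithm::replace_all(s, "~+", "Bobby");
    boost::algorithm::replace_all(s, "^*", "Danny");
    std::cout << s << '\n';   // Hi Bobby and Danny. Is Danny still flying around Bobby?
}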
I suggest using the Boost Format library. Instead of ~+ and ^* you then use %1% and %2% and so on, a bit more systematically.
Example from the docs:
cout << boost::format("writing %1%, x=%2% : %3%-th try") % "toto" % 40.23 % 50;
// prints "writing toto, x=40.230 : 50-th try"
Cheers & hth.,
– Alf
I would suggest using std::map. You have a set of replacements, so do:
std::map<std::string, std::string> replace;
replace["~+"] = "Bobby";
replace["^*"] = "Danny";
Then you could split the string into a vector of words and check whether each word occurs in the map; if it does, replace it. You'd also need to strip any punctuation marks from the end, or add those variants to the replacements. You could then do it in one loop. I'm not sure if this is really more efficient or useful than Boost, though.
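A rough sketch of that idea (whitespace tokenization only; as noted, trailing punctuation would need extra handling to match):

#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main()
{
    std::map<std::string, std::string> replace{{"~+", "Bobby"}, {"^*", "Danny"}};

    std::istringstream in("Hi ~+ and ^*. Is ^* still flying around ~+?");
    std::string out, word;
    while (in >> word)
    {
        // note: "^*." and "~+?" keep their punctuation, so they won't match as written
        auto it = replace.find(word);
        out += (it != replace.end() ? it->second : word);
        out += ' ';
    }
    std::cout << out << '\n';
}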
I need to check whether a std::string begins with "xyz". How do I do it without searching through the whole string or creating temporary strings with substr()?
I would use the compare method:
std::string s("xyzblahblah");
std::string t("xyz")
if (s.compare(0, t.length(), t) == 0)
{
// ok
}
An approach that might be more in keeping with the spirit of the Standard Library would be to define your own begins_with algorithm.
#include <algorithm>
using namespace std;
template<class TContainer>
bool begins_with(const TContainer& input, const TContainer& match)
{
    return input.size() >= match.size()
        && equal(match.begin(), match.end(), input.begin());
}
This provides a simpler interface to client code and is compatible with most Standard Library containers.
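Usage would then look something like this (a small sketch using the begins_with template defined above):

#include <string>

int main()
{
    std::string s("xyzblahblah");
    std::string t("xyz");

    if (begins_with(s, t))
    {
        // ok
    }
}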
Look at Boost's String Algo library, which has a number of useful functions such as starts_with, istarts_with (case-insensitive), etc. If you want to use only part of the Boost libraries in your project, you can use the bcp utility to copy only the needed files.
std::string::starts_with is in C++20; in the meantime, std::string::find can be used:
std::string s1("xyzblahblah");
std::string s2("xyz");

if (s1.find(s2) == 0)
{
    // ok, s1 starts with s2
}
I feel I'm not fully understanding your question. It looks as though it should be trivial:
s[0]=='x' && s[1]=='y' && s[2]=='z'
This only looks at (at most) the first three characters. The generalisation for a string which is unknown at compile time would require you to replace the above with a loop:
// look for t at the start of s
if (s.length() < t.length())
    return false;
for (std::string::size_type i = 0; i < t.length(); i++)
{
    if (s[i] != t[i])
        return false;
}
return true;