can an std::promise be made from a non-POD object? - c++

One of the things my app does is listen for and receive payloads from a socket. I never want to block. On each payload received, I want to create an object, pass it to a worker thread, and forget about it until later, which is how the prototype code works. But for the production code I want to reduce complexity (my app is large) by using the convenient async method. async takes a future made from a promise. For that to work I need to create a promise on my non-POD object, represented below by the Xxx class. I don't see any way to do that (see the error in my sample code below). Is it appropriate to use async here? If so, how can I construct a promise/future on an object more complex than int (all the code examples I've seen use either int or void):
#include <future>

class Xxx //non-POD object
{
    int i;
public:
    Xxx( int i ) : i( i ) {}
    int GetSquare() { return i * i; }
};

int factorial( std::future< Xxx > f )
{
    int res = 1;
    auto xxx = f.get();
    for( int i = xxx.GetSquare(); i > 1; i-- )
    {
        res *= i;
    }
    return res;
}

int _tmain( int argc, _TCHAR* argv[] )
{
    Xxx xxx( 2 ); // 2 represents one payload from the socket
    std::promise< Xxx > p; // error: no appropriate default constructor available
    std::future< Xxx > f = p.get_future();
    std::future< int > fu = std::async( factorial, std::move( f ) );
    p.set_value( xxx );
    fu.wait();
    return 0;
}

As Mike already answered, it's definitely a bug in the Visual C++ implementation of std::promise; what you're doing should work.
But I'm curious why you need to do it anyway. Maybe there's some other requirement that you've not shown to keep the example simple, but this would be the obvious way to write that code:
#include <future>

class Xxx //non-POD object
{
    int i;
public:
    Xxx( int i ) : i( i ) {}
    int GetSquare() { return i * i; }
};

int factorial( Xxx xxx )
{
    int res = 1;
    for( int i = xxx.GetSquare(); i > 1; i-- )
    {
        res *= i;
    }
    return res;
}

int main()
{
    Xxx xxx( 2 ); // 2 represents one payload from the socket
    std::future< int > fu = std::async( factorial, std::move( xxx ) );
    int fact = fu.get();
}

It sounds like your implementation is defective. There should be no need for a default constructor (per the general library requirements of [utility.arg.requirements]), and GCC accepts your code (after changing the weird Microsoftish _tmain to a standard main).
I'd switch to a different compiler and operating system. That might not be an option for you, so maybe you could give the class a default constructor to keep it happy.

Related

'future' has been explicitly marked deleted here

I am trying to build an async application to allow processing of large lists in parallel, and after two days of learning C++ through googling I have run into the error in the title, from the following code:
//
// main.cpp
// ThreadedLearning
//
// Created by Andy Kirk on 19/01/2016.
// Copyright © 2016 Andy Kirk. All rights reserved.
//
#include <iostream>
#include <thread>
#include <vector>
#include <chrono>
#include <future>
#include <algorithm> // for std::for_each

typedef struct {
    long mailing_id;
    char emailAddress[100];
} emailStruct;

typedef struct {
    long mailing_id = 0;
    int result = 0;
} returnValues;

returnValues work(emailStruct eMail) {
    returnValues result;
    std::this_thread::sleep_for(std::chrono::seconds(2));
    result.mailing_id = eMail.mailing_id;
    return result;
}

int main(int argc, const char * argv[]) {
    std::vector<emailStruct> Emails;
    emailStruct eMail;
    // Create a Dummy Structure Vector
    for (int i = 0 ; i < 100 ; ++i) {
        std::snprintf(eMail.emailAddress,sizeof(eMail.emailAddress),"user-%d#email_domain.tld",i);
        eMail.mailing_id = i;
        Emails.push_back(eMail);
    }
    std::vector<std::future<returnValues>> workers;
    int worker_count = 0;
    int max_workers = 11;
    for ( ; worker_count < Emails.size(); worker_count += max_workers ) {
        workers.clear();
        for (int inner_count = 0 ; inner_count < max_workers ; ++inner_count) {
            int entry = worker_count + inner_count;
            if (entry < Emails.size()) {
                emailStruct workItem = Emails[entry];
                auto fut = std::async(&work, workItem);
                workers.push_back(fut);
            }
        }
        std::for_each(workers.begin(), workers.end(), [](std::future<returnValues> & res) {
            res.get();
        });
    }
    return 0;
}
Really not sure what I am doing wrong, and have found limited answers searching. It's on OS X 10, if that is relevant, with Xcode 7.
The future class has its copy constructor deleted, because you really don't want to have multiple copies of it.
To add it to the vector, you have to move it instead of copying it:
workers.push_back(std::move(fut));
This error can also be raised if you are passing a future object (within a thread) to a function which expects a pass by value.
For example, this would raise an error when you pass the future:
void multiplyForever(int x, int y, std::future<void> exit_future);
multiplyForever(3, 5, fut);
You can fix it by passing the future by reference:
void multiplyForever(int x, int y, std::future<void>& exit_future);
multiplyForever(3, 5, fut);

std::string::reserve() and std::string::clear() conundrum

This question starts with a bit of code, just because I think it is easier to see what I am after:
/*static*/
void
Url::Split
    ( std::list<std::string>& url
    , const std::string& stringUrl
    )
{
    std::string collector;
    collector.reserve(stringUrl.length());
    for (auto c : stringUrl)
    {
        if (PathSeparator == c)
        {
            url.push_back(collector);
            collector.clear(); // Sabotages my optimization with reserve() above!
        }
        else
        {
            collector.push_back(c);
        }
    }
    url.push_back(collector);
}
In the code above, the collector.reserve(stringUrl.length()); line is supposed to reduce the number of heap operations performed during the loop below. After all, no substring can be longer than the whole URL, so reserving that much capacity up front looks like a good idea.
But once a substring is finished and I add it to the url parts list, I need to reset the string to length 0 one way or another. A brief "peek definition" inspection suggests that, at least on my platform, the reserved buffer will be released, and with that the purpose of my reserve() call is compromised.
Internally, clear() calls some _Eos(0).
I could just as well accomplish the same with collector.resize(0), but peeking at the definition reveals it also calls _Eos(newsize) internally, so the behavior is the same as calling clear().
Now the question is, if there is a portable way to establish the intended optimization and which std::string function would help me with that.
Of course I could write collector[0] = '\0'; but that looks very off to me.
Side note: While I found similar questions, I do not think this is a duplicate of any of them.
Thanks in advance.
In the C++11 standard, clear is defined in terms of erase, which is defined as value replacement. There is no obvious guarantee that the buffer isn't deallocated. It might be there, implicit in other wording, but I failed to find any such guarantee.
Without a formal guarantee that clear doesn't deallocate, and it appears that at least as of C++11 it isn't there, you have the following options:
1. Ignore the problem. After all, chances are that the micro-seconds incurred by dynamic buffer allocation will be absolutely irrelevant, and in addition, even without a formal guarantee the chance of clear deallocating is very low.
2. Require a C++ implementation where clear doesn't deallocate. (You can add an assert to this effect, checking .capacity().)
3. Do your own buffer implementation.
Ignoring the problem appears to be safe even where the allocations (if performed) would be time critical, because with common implementations clear does not reduce the capacity.
E.g., here with g++ and Visual C++ as examples:
#include <iostream>
#include <string>
using namespace std;

auto main() -> int
{
    string s = "Blah blah blah";
    cout << s.capacity();
    s.clear();
    cout << ' ' << s.capacity() << endl;
}
C:\my\so\0284>g++ keep_capacity.cpp -std=c++11
C:\my\so\0284>a
14 14
C:\my\so\0284>cl keep_capacity.cpp /Feb
keep_capacity.cpp
C:\my\so\0284>b
15 15
C:\my\so\0284>_
Doing your own buffer management, if you really want to take it that far, can be done as follows:
#include <iostream>
#include <string>
#include <vector>

namespace my {
    using std::string;
    using std::vector;

    class Collector
    {
    private:
        vector<char> buffer_;
        int size_;
    public:
        auto str() const
            -> string
        { return string( buffer_.begin(), buffer_.begin() + size_ ); }

        auto size() const -> int { return size_; }

        void append( const char c )
        {
            if( size_ < int( buffer_.size() ) )
            {
                buffer_[size_++] = c;
            }
            else
            {
                buffer_.push_back( c );
                buffer_.resize( buffer_.capacity() );
                ++size_;
            }
        }

        void clear() { size_ = 0; }

        explicit Collector( const int initial_capacity = 0 )
            : buffer_( initial_capacity )
            , size_( 0 )
        { buffer_.resize( buffer_.capacity() ); }
    };

    auto split( const string& url, const char pathSeparator = '/' )
        -> vector<string>
    {
        vector<string> result;
        Collector collector( url.length() );
        for( const auto c : url )
        {
            if( pathSeparator == c )
            {
                result.push_back( collector.str() );
                collector.clear();
            }
            else
            {
                collector.append( c );
            }
        }
        if( collector.size() > 0 ) { result.push_back( collector.str() ); }
        return result;
    }
} // namespace my

auto main() -> int
{
    using namespace std;
    auto const url = "http://en.wikipedia.org/wiki/Uniform_resource_locator";
    for( string const& part : my::split( url ) )
    {
        cout << '[' << part << ']' << endl;
    }
}

general tbb issue for calculating fibonacci numbers

I came across the TBB template below as an example of task-based programming for calculating the sum of Fibonacci numbers in C++. But when I run it I get a value of 1717986912, which can't be right. The output should be 3. What am I doing wrong?
class FibTask: public task
{
public:
    const long n;
    long* const sum;
    FibTask( long n_, long* sum_ ) : n(n_), sum(sum_) {}
    task* execute()
    {
        // Overrides virtual function task::execute
        if( n < 0 )
        {
            return 0;
        }
        else
        {
            long x, y;
            FibTask& a = *new( allocate_child() ) FibTask(n-1,&x);
            FibTask& b = *new( allocate_child() ) FibTask(n-2,&y);
            // Set ref_count to "two children plus one for the wait".
            set_ref_count(3);
            // Start b running.
            spawn( b );
            // Start a running and wait for all children (a and b).
            spawn_and_wait_for_all( a );
            // Do the sum
            *sum = x+y;
        }
        return NULL;
    }
    long ParallelFib( long n )
    {
        long sum;
        FibTask& a = *new(task::allocate_root()) FibTask(n,&sum);
        task::spawn_root_and_wait(a);
        return sum;
    }
};

long main(int argc, char** argv)
{
    FibTask * obj = new FibTask(3,0);
    long b = obj->ParallelFib(3);
    std::cout << b;
    return 0;
}
The cutoff is messed up here; it must be at least 2. E.g.:
if( n < 2 ) {
    *sum = n;
    return NULL;
}
The original example also uses SerialFib, as shown here: http://www.threadingbuildingblocks.org/docs/help/tbb_userguide/Simple_Example_Fibonacci_Numbers.htm
This method for calculating Fibonacci numbers, already inefficient because of its blocking style, becomes even more inefficient without the call to SerialFib().
WARNING: Please note that this example is intended just to demonstrate this particular low-level TBB API and this particular way of using it. It is not intended for reuse unless you are really sure why you are doing this.
Modern high-level API (though, still for the inefficient Fibonacci algorithm) would look like this:
int Fib(int n) {
    if( n < CUTOFF ) { // 2 is minimum
        return fibSerial(n);
    } else {
        int x, y;
        tbb::parallel_invoke([&]{ x = Fib(n-1); }, [&]{ y = Fib(n-2); });
        return x + y;
    }
}

Lazy transform in C++

I have the following Python snippet that I would like to reproduce using C++:
from itertools import count, imap
source = count(1)
pipe1 = imap(lambda x: 2 * x, source)
pipe2 = imap(lambda x: x + 1, pipe1)
sink = imap(lambda x: 3 * x, pipe2)
for i in sink:
    print i
I've heard of Boost Phoenix, but I couldn't find an example of a lazy transform behaving in the same way as Python's imap.
Edit: to clarify my question, the idea is not just to apply functions in sequence using a for loop, but rather to be able to use algorithms like std::transform on infinite generators. The way the functions are composed (in a dialect closer to a functional language) is also important, as the next step is function composition.
Update: thanks bradgonesurfing, David Brown, and Xeo for the amazing answers! I chose Xeo's because it's the most concise and it gets me right where I wanted to be, but David's was very important in getting the concepts through. Also, bradgonesurfing tipped me off to Boost.Range :).
Employing Boost.Range:
int main(){
    auto map = boost::adaptors::transformed; // shorten the name
    auto sink = generate(1) | map([](int x){ return 2*x; })
                            | map([](int x){ return x+1; })
                            | map([](int x){ return 3*x; });
    for(auto i : sink)
        std::cout << i << "\n";
}
Live example including the generate function.
I think the most idiomatic way to do this in C++ is with iterators. Here is a basic iterator class that takes an iterator and applies a function to its result:
template<class Iterator, class Function>
class LazyIterMap
{
private:
    Iterator i;
    Function f;
public:
    LazyIterMap(Iterator i, Function f) : i(i), f(f) {}
    decltype(f(*i)) operator* () { return f(*i); }
    void operator++ () { ++i; }
};

template<class Iterator, class Function>
LazyIterMap<Iterator, Function> makeLazyIterMap(Iterator i, Function f)
{
    return LazyIterMap<Iterator, Function>(i, f);
}
This is just a basic example and is still incomplete as it has no way to check if you've reached the end of the iterable sequence.
Here's a recreation of your example python code (also defining a simple infinite counter class).
#include <iostream>

class Counter
{
public:
    Counter (int start) : value(start) {}
    int operator* () { return value; }
    void operator++ () { ++value; }
private:
    int value;
};

int main(int argc, char const *argv[])
{
    Counter source(0);
    auto pipe1 = makeLazyIterMap(source, [](int n) { return 2 * n; });
    auto pipe2 = makeLazyIterMap(pipe1, [](int n) { return n + 1; });
    auto sink = makeLazyIterMap(pipe2, [](int n) { return 3 * n; });
    for (int i = 0; i < 10; ++i, ++sink)
    {
        std::cout << *sink << std::endl;
    }
}
Apart from the class definitions (which are just reproducing what the python library functions do), the code is about as long as the python version.
I think the Boost.Range library is what you are looking for. It should work nicely with the new C++ lambda syntax.
int pipe1(int val) {
    return 2*val;
}
int pipe2(int val) {
    return val+1;
}
int sink(int val) {
    return val*3;
}

for(int i=0; i < SOME_MAX; ++i)
{
    cout << sink(pipe2(pipe1(i))) << endl;
}
I know, it's not quite what you were expecting, but it certainly evaluates at the time you want it to, although not with an iterator interface. A very related article is this:
Component programming in D
Edit 6/Nov/12:
An alternative, still sticking to bare C++, is to use function pointers and construct your own piping for the above functions (vector of function pointers from SO q: How can I store function pointer in vector?):
typedef std::vector<int (*)(int)> funcVec;

int runPipe(funcVec funcs, int sinkVal) {
    int running = sinkVal;
    for(funcVec::iterator it = funcs.begin(); it != funcs.end(); ++it) {
        running = (*it)(running); // call the current function pointer on the running value
    }
    return running;
}
This is intended to run through all the functions in a vector of such and return the resulting value. Then you can:
funcVec funcs;
funcs.push_back(&pipe1);
funcs.push_back(&pipe2);
funcs.push_back(&sink);

for(int i=0; i < SOME_MAX; ++i)
{
    cout << runPipe(funcs, i) << endl;
}
Of course you could also construct a wrapper for that via a struct (I would use a closure if C++ did them...):
struct pipeWork {
    funcVec funcs;
    int run(int i);
};

int pipeWork::run(int i) {
    //... same guts as runPipe, or keep it separate and call:
    return runPipe(funcs, i);
}
// later...
pipeWork kitchen;
kitchen.funcs = someFuncs;
cout << kitchen.run(5) << endl;
Or something like that. Caveat: No idea what this will do if the pointers are passed between threads.
Extra caveat: If you want to do this with varying function interfaces, you will end up having to have a load of void *(void *)(void *) functions so that they can take whatever and emit whatever, or lots of templating to fix the kind of pipe you have. I suppose ideally you'd construct different kinds of pipe for different interfaces between functions, so that a | b | c works even when they are passing different types between them. But I'm going to guess that that's largely what the Boost stuff is doing.
Depending on the simplicity of the functions:
#define pipe1(x) (2*(x))
#define pipe2(x) (pipe1(x)+1)
#define sink(x) (pipe2(x)*3)

int j = 1;
while( ++j > 0 )
{
    std::cout << sink(j) << std::endl;
}

C++ MACRO that will execute a block of code and a certain command after that block

void main()
{
    int xyz = 123; // original value
    { // code block starts
        xyz++;
        if(xyz < 1000)
            xyz = 1;
    } // code block ends
    int original_value = xyz; // should be 123
}
void main()
{
    int xyz = 123; // original value
    MACRO_NAME(xyz = 123) // the macro takes the code that should be executed at the end of the block
    { // code block starts
        xyz++;
        if(xyz < 1000)
            xyz = 1;
    } // code block ends << how to make the macro execute the "xyz = 123" statement?
    int original_value = xyz; // should be 123
}
Only the first main() works.
I think the comments explain the issue.
It doesn't need to be a macro but to me it just sounds like a classical "macro-needed" case.
By the way, there's the BOOST_FOREACH macro/library and I think it does the exact same thing I'm trying to achieve but it's too complex for me to find the essence of what I need.
From its introductory manual page, an example:
#include <string>
#include <iostream>
#include <boost/foreach.hpp>

int main()
{
    std::string hello( "Hello, world!" );
    BOOST_FOREACH( char ch, hello )
    {
        std::cout << ch;
    }
    return 0;
}
The cleanest way to do this is probably to use an RAII container to reset the value:
// Assumes T's assignment does not throw
template <typename T> struct ResetValue
{
    ResetValue(T& o, T v) : object_(o), value_(v) { }
    ~ResetValue() { object_ = value_; }
    T& object_;
    T value_;
};
used as:
{
    ResetValue<int> resetter(xyz, 123);
    // ...
}
When the block ends, the destructor will be called, resetting the object to the specified value.
If you really want to use a macro, as long as it is a relatively simple expression, you can do this using a for-block:
for (bool b = false; b == false; b = true, (xyz = 123))
{
    // ...
}
which can be turned into a macro:
#define DO_AFTER_BLOCK(expr) \
    for (bool DO_AFTER_BLOCK_FLAG = false; \
         DO_AFTER_BLOCK_FLAG == false; \
         DO_AFTER_BLOCK_FLAG = true, (expr))
used as:
DO_AFTER_BLOCK(xyz = 123)
{
    // ...
}
I don't really think the macro approach is a good idea; I'd probably find it confusing were I to see this in production source code.
You don't absolutely need a macro - you could use inner scope variables:
#include <stdio.h>

int main(void)
{
    int xyz = 123;
    printf("xyz = %d\n", xyz);
    {
        int pqr = xyz;
        int xyz = pqr;
        printf("xyz = %d\n", xyz);
        xyz++;
        if (xyz < 1000)
            xyz = 1;
        printf("xyz = %d\n", xyz);
    }
    printf("xyz = %d\n", xyz);
    return(0);
}
This produces the output:
xyz = 123
xyz = 123
xyz = 1
xyz = 123
If you compile with GCC and -Wshadow you get a warning; otherwise, it compiles clean.
You can't reliably write int xyz = xyz; in the inner block; once the '=' is parsed, the declaration is complete, so the initializer refers to the inner 'xyz', not the outer one. The two-step dance works, though.
The primary demerit of this is that it requires a modification in the code block.
If there are side-effects in the block - like the print statements above - you could call a function that contains the inner block. If there are no side-effects in the block, why are you executing it at all?
#include <stdio.h>

static void inner(int xyz)
{
    printf("xyz = %d\n", xyz);
    xyz++;
    if (xyz < 1000)
        xyz = 1;
    printf("xyz = %d\n", xyz);
}

int main(void)
{
    int xyz = 123;
    printf("xyz = %d\n", xyz);
    inner(xyz);
    printf("xyz = %d\n", xyz);
    return(0);
}
You can't make a macro perform a command after a loop unless you put the loop in the macro. And seriously? It would be a much better idea just to make a scoped variable.
template<typename T> class CallFunctionOnScopeExit {
    T t;
public:
    CallFunctionOnScopeExit(T tt) : t(tt) {}
    ~CallFunctionOnScopeExit() { t(); }
};
The cleanup is guaranteed to run even in the case of exceptions, etc., whereas the macro version most definitely isn't. I would prefer to use this pattern for the exception guarantees, and because it's more flexible than just copying the int.