Auctions in Boost.Asio - C++

I'm implementing an auction system in C++ with Boost.Asio. There is a single centralized auctioneer (the server) and a number of connecting bidders (the clients). I am writing this in an asynchronous fashion and have implemented the basic communication between the bidder and the auctioneer (register, ping, get client list). The skeletal code for the auctioneer looks as follows:
class talkToBidder : public boost::enable_shared_from_this<talkToBidder>
{
    // Code for sending and receiving messages, which works fine
};

void on_round_end()
{
    // Choose the best bid and message the winner
    if (!itemList.empty())
        timer_reset();
}

void timer_reset()
{
    // Send the item information to the bidders
    // When the round ends, call on_round_end()
    auction_timer.expires_from_now(boost::posix_time::millisec(ROUND_TIME));
    auction_timer.async_wait(boost::bind(on_round_end));
}

void handle_accept(...)
{
    // Create new bidder...
    acceptor.async_accept(bidder->sock(), boost::bind(handle_accept, bidder, _1));
}

int main()
{
    // Create new bidder and handle accepting it
    talkToBidder::ptr bidder = talkToBidder::new_();
    acceptor.async_accept(bidder->sock(), boost::bind(handle_accept, bidder, _1));
    service.run();
}
My issue is, I need to wait for at least one bidder to connect before I can start the auction, so I cannot simply call timer_reset() before I use service.run(). What is the Boost.Asio way to go about doing this?

In asynchronous protocol design, it helps to draw Message Sequence Diagrams. Do include your timers.
The code now becomes trivial: you start your timer when the message arrives that should start your timer. Yes, this shifts the problem forwards a bit, but the real point is that it's not a Boost.Asio coding problem. In your case, that particular message appears to be the login of the first bidder, implemented as a TCP connect (SYN/ACK), which maps to handle_accept in your code.
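For illustration, a minimal sketch of that idea using the names from the question; bidder->start() and the auction_started flag are assumptions, not part of the original code:

bool auction_started = false; // set once the first bidder has connected

void handle_accept(talkToBidder::ptr bidder, const boost::system::error_code& ec)
{
    if (!ec)
    {
        bidder->start(); // assumed: begin talking to this bidder

        // The first successful accept is the "start the auction" message,
        // so this is where the round timer gets kicked off.
        if (!auction_started)
        {
            auction_started = true;
            timer_reset();
        }
    }

    // Keep accepting further bidders.
    talkToBidder::ptr next = talkToBidder::new_();
    acceptor.async_accept(next->sock(), boost::bind(handle_accept, next, _1));
}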

Related

Boost Asio, async_read/connect timeout

On the Boost website there is a good example of how to time out async operations. However, in that example the socket is closed to cancel the operations. There is also socket::cancel(), but both the documentation and a compiler warning describe it as problematic in terms of portability.
Among the stack of Boost.Asio timeout questions on SO, there are several kinds of answers. The first is introducing a custom event loop, i.e., looping on io_service::run_one() and cancelling the event loop on deadline. I am using io_service::run() in a worker thread, so that is not the kind of solution I would like to employ if possible, as I do not want to change my code base.
A second option is directly changing the options of the native socket. However, I would like to stick to Boost.Asio and avoid any sort of platform-specific code as much as possible.
The example in the documentation is for an old version of Boost.Asio, but it works properly, other than being forced to close the socket to cancel the operations. Using the documentation example, I have the following:
void check_deadline(const boost::system::error_code &ec)
{
    if (!running) {
        return;
    }

    if (timer.expires_at() <= boost::asio::deadline_timer::traits_type::now()) {
        // cancel all operations
        boost::system::error_code errorcode;
        boost::asio::ip::tcp::endpoint endpoint = socket.remote_endpoint();
        socket.close(errorcode);
        if (errorcode) {
            SLOGERROR(mutex, errorcode.message(), "check_deadline()");
        }
        else {
            SLOG(mutex, "timed out", "check_deadline()");
            // connect again
            Connect(endpoint);
            if (errorcode) {
                SLOGERROR(mutex, errorcode.message(), "check_deadline()");
            }
        }

        // set the timer to infinity, so that it won't expire
        // until a proper deadline is set
        timer.expires_at(boost::posix_time::pos_infin);
    }

    // keep waiting
    timer.async_wait(std::bind(&TCPClient::check_deadline, this, std::placeholders::_1));
}
This is the only callback function registered with async_wait. The first solution I could come up with was reconnecting after closing the socket. Now my question is: is there a better way? By better, I mean cancelling the operations based on a timer without actually disrupting the connection (i.e., without closing the socket).
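For comparison, a sketch of what the cancel-instead-of-close variant could look like in the same check_deadline() shape; it assumes the portability caveats of socket::cancel() mentioned above are acceptable on the target platforms, so treat it as a sketch rather than a drop-in fix:

void check_deadline(const boost::system::error_code &ec)
{
    if (!running || ec == boost::asio::error::operation_aborted) {
        return;
    }

    if (timer.expires_at() <= boost::asio::deadline_timer::traits_type::now()) {
        // Cancel the outstanding asynchronous operations without closing the
        // socket; their handlers run with boost::asio::error::operation_aborted.
        boost::system::error_code errorcode;
        socket.cancel(errorcode);
        if (errorcode) {
            SLOGERROR(mutex, errorcode.message(), "check_deadline()");
        }

        // set the timer to infinity until a proper deadline is set again
        timer.expires_at(boost::posix_time::pos_infin);
    }

    // keep waiting
    timer.async_wait(std::bind(&TCPClient::check_deadline, this, std::placeholders::_1));
}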

ZMQ C++ Event Loop Within Class

My overall goal in using ZMQ is to avoid having to get into the weeds of asynchronous message passing; ZMQ seemed like a portable and practical solution. Most of the ZeroMQ docs, however, like this one, and many of the other zmq examples I have found by Googling, are based on the helloworld.c format. That is, they are all simple procedural code inside int main(){}.
My problem is that I want to "embed" a zmq "listener" inside a C++ singleton-like class. I want to "listen" for messages and then process them. I'm planning on using zmq's PUSH -> PULL sockets, on the off chance that matters. What I cannot figure out how to do is to have an internal "event loop".
class foomgr {
public:
    static foomgr& get_foomgr();
    // ...
private:
    foomgr();
    foomgr(const foomgr&);
    // ...
    void listener_() {
        // EVENT LOOP HERE
        // RECV and PROCESS ZMQ MSGS
        // while(true) DOES NOT WORK HERE
    }
    // ...
    zmq::context_t zmqcntx_;
    zmq::socket_t zmqsock_;
    const int zmqsock_linger_ = 1000;
    // ....
};
I obviously cannot use the while(true) construct in listener_, since wherever I call it from will block. Since one of the advantages of using ZMQ is that I do not have to manage "listener" threads myself, it seems silly to have to figure out how to create my own thread to wrap listener_ in. I'm at a loss for a solution.
Note: I'm a C++ newb, so what might be obvious to most is not to me. Also, I'm trying to use generic "words", not library- or language-specific ones, to avoid confusion. The code is built with -std=c++11, so those constructs are fine.
The ZMQ C++ library does not implement a listener pattern for message polling. It leaves that task up to you to wrap in your own classes. It does support a non-blocking mode of polling for new messages, however.
So using the right code you can wrap it up in a small loop in a non-blocking fashion.
See this Polling Example on GitHub, written in C++. Note that it is polling from 2 sockets, so you'll need to modify it a little to remove the extra code.
The important part that you'll need to wrap inside your own observer implementation is below:
zmq::message_t message;
zmq::poll (&items [0], 2, -1);

if (items [0].revents & ZMQ_POLLIN) {
    receiver.recv(&message);
    // Process task
}
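As a sketch of how that could sit inside the class from the question: poll with a finite timeout instead of -1 so the loop can check a stop flag, and run listener_() on the thread that created the socket. The PULL socket, the endpoint and the stop_ flag are assumptions based on the question, not part of the linked example:

#include <zmq.hpp>
#include <atomic>

class foomgr {
public:
    static foomgr& get_foomgr() {
        static foomgr instance;
        return instance;
    }

    void listener_() {
        zmq_pollitem_t items[] = {
            { static_cast<void*>(zmqsock_), 0, ZMQ_POLLIN, 0 }
        };
        while (!stop_) {
            // Poll with a 100 ms timeout rather than blocking forever,
            // so the loop can notice stop_ and return.
            zmq::poll(items, 1, 100);
            if (items[0].revents & ZMQ_POLLIN) {
                zmq::message_t message;
                zmqsock_.recv(&message);
                // process the message here
            }
        }
    }

private:
    foomgr() : zmqcntx_(1), zmqsock_(zmqcntx_, ZMQ_PULL) {
        zmqsock_.bind("tcp://*:5555"); // assumed endpoint
    }

    zmq::context_t zmqcntx_;
    zmq::socket_t zmqsock_;
    std::atomic<bool> stop_{false}; // assumed flag so the loop can be ended cleanly
};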
Zmq is not thread safe by design (versions up to now). In fact, Zmq stresses:
Do not use or close sockets except in the thread that created them.
PERIOD.
Callbacks shouldn't be used, because the thread calling the callback will almost certainly be different from the thread that created the socket, which is forbidden.
Maybe you will find zmqHelper useful: a small library (only two classes and a few functions) to make it easier to use ZMQ in C++ and to enforce (it is guaranteed) that threads can't share sockets.
In the example sections, you will find how to do the most frequent tasks.
Hope it helps.
Code snippet: polling using zmqHelper in a ROUTER-DEALER broker.
zmq::context_t theContext {1}; // 1 I/O thread in the context

SocketAdaptor< ZMQ_ROUTER > frontend_ROUTER {theContext};
SocketAdaptor< ZMQ_DEALER > backend_DEALER  {theContext};

frontend_ROUTER.bind ("tcp://*:8000");
backend_DEALER.bind ("tcp://*:8001");

while (true) {
    std::vector<std::string> lines;

    //
    // wait (blocking poll) for data on any socket
    //
    std::vector< zmqHelper::ZmqSocketType * > list
        = { frontend_ROUTER.getZmqSocket(), backend_DEALER.getZmqSocket() };
    zmqHelper::ZmqSocketType * from = zmqHelper::waitForDataInSockets ( list );

    //
    // there is data; where is it from?
    //
    if ( from == frontend_ROUTER.getZmqSocket() ) {
        // from the frontend: read ...
        frontend_ROUTER.receiveText (lines);
        // ... and resend
        backend_DEALER.sendText( lines );
    }
    else if ( from == backend_DEALER.getZmqSocket() ) {
        // from the backend: read ...
        backend_DEALER.receiveText (lines);
        // ... and resend
        frontend_ROUTER.sendText( lines );
    }
    else if ( from == nullptr ) {
        std::cerr << "Error in poll?\n";
    }
} // while (true)

understanding RProperty IPC communication

I'm studying this code base. Basically, it is an Anim server client for Symbian 3rd edition, for the purpose of grabbing input events without consuming them, in a reliable way.
If you look at this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
Inside this client line, the client is supposed to be receiving the notification data, but it only calls Attach.
My understanding is that Attach only needs to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After attaching, the client will somewhere Subscribe to the property, passing a TRequestStatus reference. The server will signal the request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In that case the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and surprisingly few people actually do. Unfortunately I did not find a really good description online (they are explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...

void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // avoid interference with other properties by defining the category
    // as a secure ID of your process (perhaps it's the only allowed value)
    TUint propertyKey = 1; // whatever you want

    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...

    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel)
        User::LeaveIfError(iStatus.Int()); // forward the error to RunError

    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from the docs)
    iProperty.Subscribe(iStatus);

    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound)
        User::LeaveIfError(err);

    SetActive();
}
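For completeness, a CActive subclass also needs DoCancel() (and usually RunError()); a minimal sketch, assuming the same iProperty member as above:

void CMyActive::DoCancel()
{
    // Called via Cancel(): withdraw the outstanding subscription so the
    // pending request completes with KErrCancel.
    iProperty.Cancel();
}

TInt CMyActive::RunError(TInt aError)
{
    // Errors left by RunL() (via User::Leave) end up here; handle or log them.
    return KErrNone;
}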

boost asio: maintaining a list of connected clients

I'm looking for the best way to modify the Boost Asio HTTP Server 3 example to maintain a list of the currently connected clients.
If I modify server.hpp from the example as:
class server : private boost::noncopyable
{
public:
    typedef std::vector< connection_ptr > ConnectionList;

    // ...

    ConnectionList::const_iterator GetClientList() const
    {
        return connection_list_.begin();
    }

    void handle_accept(const boost::system::error_code& e)
    {
        if (!e)
        {
            connection_list_.push_back( new_connection_ );
            new_connection_->start();
            // ...
        }
    }

private:
    ConnectionList connection_list_;
};
Then I mess up the lifetime of the connection object such that it doesn't go out of scope and disconnect from the client because it still has a reference maintained in the ConnectionList.
If instead my ConnectionList is defined as typedef std::vector< boost::weak_ptr< connection > > ConnectionList; then I run the risk of the client disconnecting and nullifying its pointer while somebody is using it from GetClientList().
Anybody have a suggestion on a good & safe way to do this?
Thanks,
PaulH
HTTP is stateless. That means it's difficult to even define what "currently connected client" means, not to mention keep track of which clients are connected at any given time. The only time there's really a "current client" is from the time a request is received to the time that request is serviced (often only a few milliseconds). A connection is not maintained even for the duration of downloading one page -- rather, each item on the page is requested and sent separately.
The typical method for handling this is to use a fairly simple timeout -- a client is considered "connected" for some arbitrary length of time (a few minutes) after they send in a request. A cookie of some sort is used to identify the client sending in a particular request.
The rest of what you're talking about is just a matter of making sure the collection you use to hold connection information is thread safe. You have one thread that adds connections, one thread that deletes them, and N threads that use the data currently in the list. The standard collections don't guarantee any thread safety, but there are others around that do.
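As a sketch of that last point, one way to combine the weak_ptr idea from the question with a mutex is to hand out shared_ptr snapshots and prune expired entries while the lock is held. The names here are illustrative and not part of the HTTP Server 3 example:

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <boost/thread/mutex.hpp>
#include <vector>

class connection; // the connection class from the example
typedef boost::shared_ptr<connection> connection_ptr;

class connection_registry
{
public:
    void add(const connection_ptr& c)
    {
        boost::mutex::scoped_lock lock(mutex_);
        connections_.push_back(c);
    }

    // Returns strong references to the connections that are still alive,
    // dropping expired entries along the way. Callers get their own
    // shared_ptrs, so a client disconnecting cannot pull the object out
    // from under them while they iterate.
    std::vector<connection_ptr> snapshot()
    {
        boost::mutex::scoped_lock lock(mutex_);
        std::vector<connection_ptr> alive;
        std::vector< boost::weak_ptr<connection> > still_alive;
        for (std::size_t i = 0; i < connections_.size(); ++i)
        {
            if (connection_ptr c = connections_[i].lock())
            {
                alive.push_back(c);
                still_alive.push_back(connections_[i]);
            }
        }
        connections_.swap(still_alive);
        return alive;
    }

private:
    boost::mutex mutex_;
    std::vector< boost::weak_ptr<connection> > connections_;
};

Because the registry only holds weak_ptrs, it does not extend the connection's lifetime, which addresses the lifetime concern in the question.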

Simple C/C++ network I/O library

I have the following problem to solve. I want to make a number of requests to a number of "remote" servers (actually, a server farm we control). The connection is very simple. Send a line, and then read lines back. Because of the number of requests and the number of servers, I use pthreads, one for each request.
The naive approach, using blocking sockets, does not work; very occasionally, I'll have a thread stuck in 'connect'. I cannot use SIGALRM because I am using pthreads. I tried converting the code to O_NONBLOCK but this vastly complicated the code to read single lines.
What are my options? I'm looking for the simplest solution that allows the following pseudocode:
// Inside a pthread
try {
req = connect(host, port);
req.writeln("request command");
while (line = req.readline()) {
// Process line
}
} catch TimeoutError {
// Bitch and complain
}
My code is in C++ and I'm using Boost. A quick look at Boost ASIO shows me that it probably isn't the correct approach, but I could be wrong. ACE is far, far too heavy-weight to solve this problem.
Have you looked at libevent?
http://www.monkey.org/~provos/libevent/
It's a totally different paradigm, but the performance is amazing.
memcached is built on top of libevent.
I saw the comments and I think you can use boost::asio with boost::asio::deadline_timer.
A fragment of the code:
void restart_timer()
{
    timer_.cancel();
    timer_.expires_from_now(boost::posix_time::seconds(5));
    timer_.async_wait(boost::bind(&MyClass::handleTimeout,
        MyClass::shared_from_this(), boost::asio::placeholders::error));
}
where handleTimeout is a callback function, timer_ is a boost::asio::deadline_timer,
and MyClass is similar to
class Y : public enable_shared_from_this<Y>
{
public:
    shared_ptr<Y> f()
    {
        return shared_from_this();
    }
};
You can call restart_timer before connect or read/write.
More information about shared_from_this()
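The answer leaves handleTimeout open; a possible sketch, assuming MyClass owns a tcp::socket member called socket_ and that a timeout should cancel the pending operations rather than close the connection:

void MyClass::handleTimeout(const boost::system::error_code& error)
{
    // restart_timer() calls timer_.cancel(), which completes the previous
    // wait with operation_aborted; ignore those invocations.
    if (error == boost::asio::error::operation_aborted)
        return;

    // The deadline really expired: cancel the outstanding connect/read/write.
    // Their handlers will be invoked with boost::asio::error::operation_aborted.
    boost::system::error_code ignored;
    socket_.cancel(ignored); // socket_ is an assumed member, not from the answer
}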
You mentioned this happens 'very occasionally'. Your 'connect' side should have the fault tolerance and error handling you are looking for, but you should also consider the stability of your servers, DNS, network connections, etc.
The underlying protocols are very sturdy and work very well, so if you are experiencing these kind of problems that often then it might be worth checking.
You may also be able to close the socket from the other thread. That should cause the connect to fail.