I wrote a program that captures packets using the WinPcap library. (For how to compile WinPcap with VS2010, see this link: http://www.rhyous.com/2011/11/12/how-to-compile-winpcap-with-visual-studio-2010/)
I want to capture packets (using pcap_loop()) and process them at the same time in the function void D. The line int x = 1; in D is just a simple placeholder for the functions that would process the packets. Thread t calls D, and thread tt calls startCap(), which does the capturing.
To emphasize the problem: when I debug the program, it blocks at pcap_loop() in thread tt until pcap_loop() finishes. I pass a count of -1 to pcap_loop() so that it runs without end, because I need continuous packet capture. As a result the program never reaches the next statement, int x = 1;. I want int x = 1; to run while thread tt is running. In short, I want both threads t and tt to run at the same time, but the program stays inside thread tt and never gets back to thread t.
PS 1: I would prefer to keep int x = 1; inside void D.
PS 2: There are no errors while compiling or debugging, but the program gets stuck inside pcap_loop(), which runs in thread tt.
Did I make my question clear?
The whole code:
Data.h
#ifndef DATA_H
#define DATA_H
#include <pcap/pcap.h>
#include <string>
#include <vector>
using namespace std;
struct PcapDevice
{
string name;
string description;
};
class Data
{
public:
Data();
~Data();
bool findDevices(vector<PcapDevice> &deviceList );
bool openDevices(const char * device);
void processPacket();
void startCap();
private:
pcap_t* inData;
char errbuf[256];
bool isCapturing;
pcap_if_t * m_pDevs;
};
#endif // DATA_H
Data.cpp
#include "Data.h"
#include <iostream>
// define pcap callback
void capture_callback_handler(unsigned char *userData, const struct pcap_pkthdr* pkthdr, const unsigned char * packet)
{
((Data*) userData)->processPacket();
}
Data::Data()
{
memset(errbuf,0,PCAP_ERRBUF_SIZE);
isCapturing = false;
inData = NULL;
m_pDevs = NULL;
}
Data::~Data()
{
}
// process the packet
void Data::processPacket()
{
return ;
}
// find local adapter
bool Data::findDevices(vector<PcapDevice> &deviceList )
{
m_pDevs = NULL;
int res = pcap_findalldevs(&m_pDevs, errbuf);
if(0 == res)
{
pcap_if_t * pIter = m_pDevs;
while(pIter != NULL)
{
PcapDevice device;
device.description = pIter->description;
device.name = pIter->name;
deviceList.push_back(device);
pIter = pIter->next;
}
return true;
}
else
{
printf("PCAP: no devices found\n");
}
return false;
}
// open the adapter
bool Data::openDevices(const char *device)
{
if ( (inData = pcap_open_live(device, 8192, 1, 512, errbuf)) == NULL)
{
return false;
}
return true;
}
// start the process of capturing the packet
void Data::startCap()
{
if ( inData == NULL ){
fprintf(stderr, "ERROR: no source set\n" );
return;
}
// free the list of device(adapter)
pcap_freealldevs(m_pDevs);
Data* data = this;
isCapturing = true;
// capture in the loop
if ( pcap_loop(inData, -1, capture_callback_handler, (unsigned char *) data) == -1)
{
fprintf(stderr, "ERROR: %s\n", pcap_geterr(inData) );
isCapturing = false;
}
}
main.cpp
#include <WinSock2.h>
#include <Windows.h>
#include <time.h>
#include "Data.h"
#include <boost\thread\thread.hpp>
#include <boost\function.hpp>
struct parameter
{
Data* pData;
};
void D(void* pParam)
{
// init the parameter
parameter* pUserParams = (parameter*)pParam;
boost::function<void()> f;
// the capture thread will be started
f = boost::bind(&Data::startCap, pUserParams->pData);
boost::thread tt(f);
tt.join();
// I want to work on the packet at the same time, the code "int x=1" is only an easy example
// and it represents a series of functions that can process the packet. I want to run those function as the thread tt runs.
int x = 1;
}
int main()
{
Data oData;
parameter pPara ;
pPara.pData = &oData;
std::vector<PcapDevice> DevList;
oData.findDevices(DevList);
int num = DevList.size()-1;
oData.openDevices(DevList[num].name.c_str());
boost::thread t(D,(void*)&pPara);
t.join();
}
Calling tt.join() will wait until the thread finishes (that is, startCap() returns) before executing the next statement.
You can simply put your int x = 1; before the join(); however, the thread may not have even started at that point. If you want to ensure the thread is running, or has reached a certain point, before processing int x = 1;, you can use a condition_variable:
The condition_variable class is a synchronization primitive that can be used to block a thread, or multiple threads at the same time, until:
a notification is received from another thread
void Data::startCap(std::condition_variable& cv)
{
if ( inData == NULL ){
fprintf(stderr, "ERROR: no source set\n" );
return;
}
// free the list of device(adapter)
pcap_freealldevs(m_pDevs);
Data* data = this;
isCapturing = true;
// Notify others that we are ready to begin capturing packets
cv.notify_one();
// capture in the loop
if ( pcap_loop(inData, -1, capture_callback_handler, (unsigned char *) data) == -1)
{
fprintf(stderr, "ERROR: %s\n", pcap_geterr(inData) );
isCapturing = false;
}
}
void D(void* pParam)
{
// init the parameter
parameter* pUserParams = (parameter*)pParam;
// Create the condition_variable and a mutex to wait on
std::condition_variable cv;
std::mutex m;
// Pass the condition_variable by reference to the thread
boost::thread tt(&Data::startCap, pUserParams->pData, std::ref(cv));
// Wait until the thread notifies us it's ready:
std::unique_lock<std::mutex> lock(m);
cv.wait(lock);
// Process packets etc.
int x = 1;
// Wait for the thread to finish
tt.join();
}
Now D() will start the thread tt and then wait until startCap() has reached a certain point (where it calls notify_one()) before continuing. (Note that this needs <condition_variable> and <mutex> to be included.)
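A bare cv.wait(lock) can also wake up spuriously. A common refinement, shown here as a small self-contained sketch (fake_startCap is just a stand-in for the real startCap()), is to pair the condition variable with a boolean flag and wait on a predicate:
#include <condition_variable>
#include <mutex>
#include <thread>
#include <iostream>
std::mutex m;
std::condition_variable cv;
bool capturing = false; // set by the capture thread once it is ready
void fake_startCap()
{
    {
        std::lock_guard<std::mutex> lk(m);
        capturing = true;       // the real code would do this just before pcap_loop()
    }
    cv.notify_one();
    // ... pcap_loop() would run here ...
}
int main()
{
    std::thread tt(fake_startCap);
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return capturing; }); // returns only once capturing is true
    std::cout << "capture thread is ready, start processing\n";
    lock.unlock();
    tt.join();
}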
I've used the program from the answer in this link, with some modifications.
Below is my modified code:
#include <linux/netlink.h>
#include <netlink/netlink.h>
#include <netlink/route/qdisc.h>
#include <netlink/route/qdisc/plug.h>
#include <netlink/socket.h>
#include <atomic>
#include <csignal>
#include <iostream>
#include <stdexcept>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <thread>
#include <queue>
#include <chrono>
/**
* Netlink route socket.
*/
struct Socket {
Socket() : handle{nl_socket_alloc()} {
if (handle == nullptr) {
throw std::runtime_error{"Failed to allocate socket!"};
}
if (int err = nl_connect(handle, NETLINK_ROUTE); err < 0) {
throw std::runtime_error{"Unable to connect netlink socket: " +
std::string{nl_geterror(err)}};
}
}
Socket(const Socket &) = delete;
Socket &operator=(const Socket &) = delete;
Socket(Socket &&) = delete;
Socket &operator=(Socket &&) = delete;
~Socket() { nl_socket_free(handle); }
struct nl_sock *handle;
};
/**
* Read all links from netlink socket.
*/
struct LinkCache {
explicit LinkCache(Socket *socket) : handle{nullptr} {
if (int err = rtnl_link_alloc_cache(socket->handle, AF_UNSPEC, &handle);
err < 0) {
throw std::runtime_error{"Unable to allocate link cache: " +
std::string{nl_geterror(err)}};
}
}
LinkCache(const LinkCache &) = delete;
LinkCache &operator=(const LinkCache &) = delete;
LinkCache(LinkCache &&) = delete;
LinkCache &operator=(LinkCache &&) = delete;
~LinkCache() { nl_cache_free(handle); }
struct nl_cache *handle;
};
/**
* Link (such as "eth0" or "wlan0").
*/
struct Link {
Link(LinkCache *link_cache, const std::string &iface)
: handle{rtnl_link_get_by_name(link_cache->handle, iface.c_str())} {
if (handle == nullptr) {
throw std::runtime_error{"Link does not exist:" + iface};
}
}
Link(const Link &) = delete;
Link &operator=(const Link &) = delete;
Link(Link &&) = delete;
Link &operator=(Link &&) = delete;
~Link() { rtnl_link_put(handle); }
struct rtnl_link *handle;
};
/**
* Queuing discipline.
*/
struct QDisc {
QDisc(const std::string &iface, const std::string &kind)
: handle{rtnl_qdisc_alloc()} {
if (handle == nullptr) {
throw std::runtime_error{"Failed to allocate qdisc!"};
}
struct rtnl_tc *tc = TC_CAST(handle);
// Set link
LinkCache link_cache{&socket};
Link link{&link_cache, iface};
rtnl_tc_set_link(tc, link.handle);
// Set parent qdisc
uint32_t parent = 0;
if (int err = rtnl_tc_str2handle("root", &parent); err < 0) {
throw std::runtime_error{"Unable to parse handle: " +
std::string{nl_geterror(err)}};
}
rtnl_tc_set_parent(tc, parent);
// Set kind (e.g. "plug")
if (int err = rtnl_tc_set_kind(tc, kind.c_str()); err < 0) {
throw std::runtime_error{"Unable to set kind: " +
std::string{nl_geterror(err)}};
}
}
QDisc(const QDisc &) = delete;
QDisc &operator=(const QDisc &) = delete;
QDisc(QDisc &&) = delete;
QDisc &operator=(QDisc &&) = delete;
~QDisc() {
if (int err = rtnl_qdisc_delete(socket.handle, handle); err < 0) {
std::cerr << "Unable to delete qdisc: " << nl_geterror(err) << std::endl;
}
rtnl_qdisc_put(handle);
}
void send_msg() {
int flags = NLM_F_CREATE;
if (int err = rtnl_qdisc_add(socket.handle, handle, flags); err < 0) {
throw std::runtime_error{"Unable to add qdisc: " +
std::string{nl_geterror(err)}};
}
}
Socket socket;
struct rtnl_qdisc *handle;
};
/**
* Queuing discipline for plugging traffic.
*/
class Plug {
public:
Plug(const std::string &iface, uint32_t limit, std::string msg)
: qdisc_{iface, "plug"} {
rtnl_qdisc_plug_set_limit(qdisc_.handle, limit);
qdisc_.send_msg();
// set_enabled(enabled_);
set_msg(msg);
}
// void set_enabled(bool enabled) {
// if (enabled) {
// rtnl_qdisc_plug_buffer(qdisc_.handle);
// } else {
// rtnl_qdisc_plug_release_one(qdisc_.handle);
// }
// qdisc_.send_msg();
// enabled_ = enabled;
// }
void set_msg(std::string msg) {
if (msg == "buffer") {
int ret = rtnl_qdisc_plug_buffer(qdisc_.handle);
//std::cout<<strerror(ret);
} else if(msg == "commit") {
int ret = rtnl_qdisc_plug_release_one(qdisc_.handle);
//std::cout<<strerror(ret);
} else {
int ret = rtnl_qdisc_plug_release_indefinite(qdisc_.handle);
//std::cout<<strerror(ret);
}
qdisc_.send_msg();
}
// bool is_enabled() const { return enabled_; }
private:
QDisc qdisc_;
// bool enabled_;
};
std::atomic<bool> quit{false};
void exit_handler(int /*signal*/) { quit = true; }
// this function busy wait on job queue until there's something
//and calls release operation i.e. unplug qdisc to release output packets
//generated for a particular epoch
void transmit_ckpnt(std::queue<int> &job_queue, Plug &plug){
while(true){
while(!job_queue.empty()){
int id = job_queue.front();
job_queue.pop();
std::string s = std::to_string(id);
std::cout<<"called from parallel thread "<<s<<"\n";
//release buffer
plug.set_msg("commit");
}
}
}
int main() {
std::string iface{"veth-host"};
constexpr uint32_t buffer_size = 10485760;
// bool enabled = true;
Plug plug{iface, buffer_size, "buffer"};
/**
* Set custom exit handler to ensure destructor runs to delete qdisc.
*/
struct sigaction sa {};
sa.sa_handler = exit_handler;
sigfillset(&sa.sa_mask);
sigaction(SIGINT, &sa, nullptr);
pid_t wpid;
int status = 0;
std::queue<int> job_queue;
int ckpnt_no = 1;
std::thread td(transmit_ckpnt, std::ref(job_queue), std::ref(plug));
plug.set_msg("indefinite");
while(true){
//plug the buffer at start of the epoch
plug.set_msg("buffer");
//wait for completion of epoch
sleep(4);
job_queue.push(ckpnt_no);
ckpnt_no += 1;
}
plug.set_msg("indefinite");
td.join();
// while (!quit) {
// std::cout << "Plug set to " << plug.is_enabled() << std::endl;
// std::cout << "Press <Enter> to continue.";
// std::cin.get();
// plug.set_enabled(!plug.is_enabled());
// }
return EXIT_SUCCESS;
}
Walkthrough of the code: this program creates a plug/unplug qdisc, in which a plug operation buffers network packets and an unplug operation releases the packets from the first plug (the front of the queuing discipline) up to the second plug in the qdisc. The program works correctly if plug and unplug operations strictly alternate. But I want to use it the way it was designed to be used, as described in this link, i.e.:
TCQ_PLUG_BUFFER (epoch i)
TCQ_PLUG_BUFFER (epoch i+1)
TCQ_PLUG_RELEASE_ONE (for epoch i)
TCQ_PLUG_BUFFER (epoch i+2)
..............................so on
In my program, the main thread starts buffering at the beginning of every epoch and then continues execution. The job thread takes a job id from the job queue and releases the buffered packets from the head of the queue up to the next plug. But this gives the error below:
./a.out: /lib/x86_64-linux-gnu/libnl-3.so.200: no version information available (required by ./a.out)
./a.out /usr/lib/x86_64-linux-gnu/libnl-route-3.so.200: no version information available (required by ./a.out)
called from parallel thread 1
called from parallel thread 2
called from parallel thread 3
called from parallel thread 4
called from parallel thread 5
called from parallel thread 6
called from parallel thread 7
terminate called after throwing an instance of 'std::runtime_error'
what(): Unable to add qdisc: Message sequence number mismatch
Aborted
I don't understand what this error means or why it occurs. When the release was performed sequentially in the main thread, everything worked. Now a separate thread performs the release: it checks whether job_queue is empty, performs release operations while there is something in the queue, and busy-waits when the queue is empty.
The expected sequence counter is stored by libnl as part of the nl_sock struct (reference). When multiple threads call libnl functions, this can cause inconsistencies, such as a data race (two threads writing to the sequence counter at the same time) or a race condition (a time-of-check-time-of-use problem, where one thread checks whether the counter satisfies some condition and then performs some operation, but in between the other thread modifies the counter). See here for more details on data races and race conditions.
Sidenote: Both g++ and clang++ support the -fsanitize=thread flag, which automatically inserts additional debug code into the binary that can help to detect this kind of data races (reference). Though in this case, it might not be as useful, since you would also have to get libnl compiled with this flag, which might not be easy.
From the libnl documentation (reference):
The next step is to check the sequence number of the message against
the currently expected sequence number. The application may provide
its own sequence number checking algorithm by setting the callback
function NL_CB_SEQ_CHECK to its own implementation. In fact, calling
nl_socket_disable_seq_check() to disable sequence number checking will
do nothing more than set the NL_CB_SEQ_CHECK hook to a function which
always returns NL_OK.
This leaves us with the following options:
Use a mutex to guard all access to libnl functions that might modify the sequence counter.
Disable sequence counter checking using nl_socket_disable_seq_check.
From my point of view, 1) is the more robust solution. If you care more about performance than robustness, then you could go with 2).
Option 1: Use a mutex to guard access to libnl functions
Include the mutex header from the standard library:
#include <mutex>
In the Plug class, add a std::mutex as a member:
class Plug {
...
private:
std::mutex seq_counter_mutex_;
...
};
At the beginning of set_msg, use a std::lock_guard for acquiring the mutex for the duration of the function. This ensures that only one thread can enter the function at the same time:
void set_msg(std::string msg) {
std::lock_guard guard{seq_counter_mutex_};
...
}
Option 2: Disable sequence number checking
In the Socket class, at the end of the constructor you can disable sequence counter checking with:
nl_socket_disable_seq_check(handle);
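For illustration, the Socket constructor from the question would then look roughly like this (a sketch; everything except the added last line is unchanged):
Socket() : handle{nl_socket_alloc()} {
    if (handle == nullptr) {
        throw std::runtime_error{"Failed to allocate socket!"};
    }
    if (int err = nl_connect(handle, NETLINK_ROUTE); err < 0) {
        throw std::runtime_error{"Unable to connect netlink socket: " +
                                 std::string{nl_geterror(err)}};
    }
    // Accept replies regardless of their sequence number (trades robustness for convenience)
    nl_socket_disable_seq_check(handle);
}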
The Question
What's a good way to increment a counter and signal once that counter reaches a given value (i.e., signaling a function that is waiting in block_until_full(), below)? It's a lot like asking for a semaphore. The processes involved communicate via shared memory (/dev/shm), and I'm currently trying to avoid using a library (like Boost).
Initial Solution
Declare a struct that contains a SignalingIncrementingCounter. This struct is allocated in shared memory, and a single process sets up the shared memory with this struct before the other processes begin. The SignalingIncrementingCounter contains the following three fields:
A plain old int to represent the counter's value.
Note: Due to the MESI caching protocol, we are guaranteed that if one CPU core modifies the value, the updated value will be seen when the value is subsequently read from other cores' caches.
A pthread mutex to guard the reading and incrementing of the integer counter
A pthread condition variable to signal when the integer has reached a desirable value
Other Solutions
Instead of using an int, I also tried using std::atomic<int>. I've tried just defining this field as a member of the SignalingIncrementingCounter class, and I've also tried allocating it into the struct at run time with placement new. It seems that neither worked better than the int.
The following should work.
The Implementation
I include most of the code, but I leave out parts of it for the sake of brevity.
signaling_incrementing_counter.h
#include <atomic>
#include <pthread.h>
struct SignalingIncrementingCounter {
public:
void init(const int upper_limit_);
void reset_to_empty();
void increment(); // only valid when counting up
void block_until_full(const char * comment = {""});
private:
int upper_limit;
volatile int value;
pthread_mutex_t mutex;
pthread_cond_t cv;
};
signaling_incrementing_counter.cpp
#include <pthread.h>
#include <cerrno>
#include <cstdio>
#include <ctime>
#include <stdexcept>
#include "signaling_incrementing_counter.h"
void SignalingIncrementingCounter::init(const int upper_limit_) {
upper_limit = upper_limit_;
{
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
int retval = pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
if (retval) {
throw std::runtime_error("Error while setting sharedp field for mutex");
}
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
pthread_mutex_init(&mutex, &attr);
pthread_mutexattr_destroy(&attr);
}
{
pthread_condattr_t attr;
pthread_condattr_init(&attr);
pthread_condattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
pthread_cond_init(&cv, &attr);
pthread_condattr_destroy(&attr);
}
value = 0;
}
void SignalingIncrementingCounter::reset_to_empty() {
pthread_mutex_lock(&mutex);
value = 0;
// No need to signal, because in my use-case, there is no function that unblocks when the value changes to 0
pthread_mutex_unlock(&mutex);
}
void SignalingIncrementingCounter::increment() {
pthread_mutex_lock(&mutex);
fprintf(stderr, "incrementing\n");
++value;
if (value >= upper_limit) {
pthread_cond_broadcast(&cv);
}
pthread_mutex_unlock(&mutex);
}
void SignalingIncrementingCounter::block_until_full(const char * comment) {
struct timespec max_wait = {0, 0};
pthread_mutex_lock(&mutex);
while (value < upper_limit) {
int val = value;
printf("blocking until full, value is %i, for %s\n", val, comment);
clock_gettime(CLOCK_REALTIME, &max_wait);
max_wait.tv_sec += 5; // wait 5 seconds
const int timed_wait_rv = pthread_cond_timedwait(&cv, &mutex, &max_wait);
if (timed_wait_rv)
{
switch(timed_wait_rv) {
case ETIMEDOUT:
break;
default:
throw std::runtime_error("Unexpected error encountered. Investigate.");
}
}
}
pthread_mutex_unlock(&mutex);
}
Using either an int or std::atomic works.
One of the great things about the std::atomic interface is that it plays quite nicely with the int "interface". So, the code is almost exactly the same. One can switch between each implementation below by adding a #define USE_INT_IN_SHARED_MEMORY_FOR_SIGNALING_COUNTER true.
I'm not so sure about statically creating the std::atomic in shared memory, so I use placement new to allocate it. My guess is that relying on the static allocation would work, but it may technically be undefined behavior. Figuring that out is beyond the scope of my question, but a comment on that topic would be quite welcome.
signaling_incrementing_counter.h
#include <atomic>
#include <pthread.h>
#include "gpu_base_constants.h"
struct SignalingIncrementingCounter {
public:
/**
* We will either count up or count down to the given limit. Once the limit is reached, whatever is waiting on this counter will be signaled and allowed to proceed.
*/
void init(const int upper_limit_);
void reset_to_empty();
void increment(); // only valid when counting up
void block_until_full(const char * comment = {""});
// We don't have a use-case for the block_until_non_full
private:
int upper_limit;
#if USE_INT_IN_SHARED_MEMORY_FOR_SIGNALING_COUNTER
volatile int value;
#else // USE_INT_IN_SHARED_MEMORY_FOR_SIGNALING_COUNTER
std::atomic<int> value;
std::atomic<int> * value_ptr;
#endif // USE_INT_IN_SHARED_MEMORY_FOR_SIGNALING_COUNTER
pthread_mutex_t mutex;
pthread_cond_t cv;
};
signaling_incrementing_counter.cpp
#include <pthread.h>
#include <cerrno>
#include <cstdio>
#include <ctime>
#include <stdexcept>
#include "signaling_incrementing_counter.h"
void SignalingIncrementingCounter::init(const int upper_limit_) {
upper_limit = upper_limit_;
#if !USE_INT_IN_SHARED_MEMORY_FOR_SIGNALING_COUNTER
value_ptr = new(&value) std::atomic<int>(0);
#endif // USE_INT_IN_SHARED_MEMORY_FOR_SIGNALING_COUNTER
{
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
int retval = pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
if (retval) {
throw std::runtime_error("Error while setting sharedp field for mutex");
}
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
pthread_mutex_init(&mutex, &attr);
pthread_mutexattr_destroy(&attr);
}
{
pthread_condattr_t attr;
pthread_condattr_init(&attr);
pthread_condattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
pthread_cond_init(&cv, &attr);
pthread_condattr_destroy(&attr);
}
reset_to_empty(); // should be done at end, since mutex functions are called
}
void SignalingIncrementingCounter::reset_to_empty() {
int mutex_rv = pthread_mutex_lock(&mutex);
if (mutex_rv) {
throw std::runtime_error("Unexpected error encountered while grabbing lock. Investigate.");
}
value = 0;
// No need to signal, because there is no function that unblocks when the value changes to 0
pthread_mutex_unlock(&mutex);
}
void SignalingIncrementingCounter::increment() {
fprintf(stderr, "incrementing\n");
int mutex_rv = pthread_mutex_lock(&mutex);
if (mutex_rv) {
throw std::runtime_error("Unexpected error encountered while grabbing lock. Investigate.");
}
++value;
fprintf(stderr, "incremented\n");
if (value >= upper_limit) {
pthread_cond_broadcast(&cv);
}
pthread_mutex_unlock(&mutex);
}
void SignalingIncrementingCounter::block_until_full(const char * comment) {
struct timespec max_wait = {0, 0};
int mutex_rv = pthread_mutex_lock(&mutex);
if (mutex_rv) {
throw std::runtime_error("Unexpected error encountered while grabbing lock. Investigate.");
}
while (value < upper_limit) {
int val = value;
printf("blocking during increment until full, value is %i, for %s\n", val, comment);
/*const int gettime_rv =*/ clock_gettime(CLOCK_REALTIME, &max_wait);
max_wait.tv_sec += 5;
const int timed_wait_rv = pthread_cond_timedwait(&cv, &mutex, &max_wait);
if (timed_wait_rv)
{
switch(timed_wait_rv) {
case ETIMEDOUT:
break;
default:
pthread_mutex_unlock(&mutex);
throw std::runtime_error("Unexpected error encountered. Investigate.");
}
}
}
pthread_mutex_unlock(&mutex);
}
I am reading a message from a socket in C++ code and am trying to plot it interactively with matplotlib, but it seems the Python code blocks the main thread, no matter whether I use show() or ion() and draw(). (ion() and draw() don't block when called from Python itself.)
Any idea how to plot interactively with matplotlib in C++ code?
An example would be really good.
Thanks a lot.
You may also try creating a new thread that does the call to the
blocking function, so that it does not block IO in your main program
loop. Use an array of thread objects and loop through to find an unused
one, create a thread to do the blocking calls, and have another thread
that joins them when they are completed.
This code is a quick slap-together I did to demonstrate what I mean about using threads to get pseudo-asynchronous behavior for blocking functions. I have not compiled it or combed over it very well; it is simply to show you how to accomplish this.
#include <pthread.h>
#include <sys/types.h>
#include <string>
#include <memory.h>
#include <malloc.h>
#define MAX_THREADS 256 // Make this as low as possible!
using namespace std;
pthread_t PTHREAD_NULL;
typedef string someTypeOrStruct;
class MyClass
{
typedef struct
{
int id;
MyClass *obj;
someTypeOrStruct input;
} thread_data;
void draw(); //Undefined in this example
bool getInput(someTypeOrStruct *); //Undefined in this example
int AsyncDraw(MyClass * obj, someTypeOrStruct &input);
static void * Joiner(MyClass * obj);
static void * DoDraw(thread_data *arg);
pthread_t thread[MAX_THREADS], JoinThread;
bool threadRunning[MAX_THREADS], StopJoinThread;
bool exitRequested;
public:
void Main();
};
bool MyClass::getInput(someTypeOrStruct *input)
{
return false; // stub: real code would fill *input and return true when input is available
}
void MyClass::Main()
{
exitRequested = false;
// mark every thread slot as free before any workers are created
for(int i = 0; i < MAX_THREADS; i++)
{
thread[i] = PTHREAD_NULL;
threadRunning[i] = false;
}
pthread_create( &JoinThread, NULL, (void *(*)(void *))MyClass::Joiner, this);
while(!exitRequested)
{
someTypeOrStruct tmpinput;
if(getInput(&tmpinput))
AsyncDraw(this, tmpinput);
}
if(JoinThread != PTHREAD_NULL)
{
StopJoinThread = true;
pthread_join(JoinThread, NULL);
}
}
void *MyClass::DoDraw(thread_data *arg)
{
if(arg == NULL) return NULL;
thread_data *data = (thread_data *) arg;
MyClass *obj = data->obj;
int id = data->id;
obj->threadRunning[id] = true;
// -> Do your draw here <- //
delete data; // release the argument block before signalling completion (it was allocated with new)
obj->threadRunning[id] = false; // Let the joinThread know we are done with this handle...
return NULL;
}
int MyClass::AsyncDraw(MyClass *obj, someTypeOrStruct &input)
{
int timeout = 10; // Adjust higher to make it try harder...
while(timeout)
{
for(int i = 0; i < MAX_THREADS; i++)
{
if(thread[i] == PTHREAD_NULL)
{
// use new (not malloc) so the someTypeOrStruct member is properly constructed
thread_data *data = new thread_data();
data->id = i;
data->obj = this;
data->input = input;
pthread_create( &(thread[i]), NULL,(void* (*)(void*))MyClass::DoDraw, (void *)data);
return 1;
}
}
timeout--;
}
return 0;
}
void *MyClass::Joiner(MyClass * obj)
{
obj->StopJoinThread = false;
while(!obj->StopJoinThread)
{
for(int i = 0; i < MAX_THREADS; i++)
if(!obj->threadRunning[i] && obj->thread[i] != PTHREAD_NULL)
{
pthread_join(obj->thread[i], NULL);
obj->thread[i] = PTHREAD_NULL;
}
}
}
int main(int argc, char **argv)
{
MyClass base;
base.Main();
return 0;
}
This way you can continue accepting input while the draw is occurring.
Fixed so the above code actually compiles; make sure to add -lpthread when linking.
I have a class defined like this (it is not complete and probably won't compile):
class Server
{
public:
Server();
~Server();
class Worker
{
public:
Worker(Server& server) : _server(server) { }
~Worker() { }
void Run() { }
void Stop() { }
private:
Server& _server;
};
void Run()
{
while(true) {
// do work
}
}
void Stop()
{
// How do I stop the thread?
}
private:
std::vector<Worker> _workers;
};
My question is: how do I initialize the _workers vector while passing in the enclosing Server object?
What I want is a vector of worker threads. Each worker thread has its own state but can access some shared data (not shown). Also, how do I create the threads? Should they be created when the Server object is constructed, or externally from a thread_group?
Also, how do I go about shutting down the threads cleanly and safely?
EDIT:
It seems that I can initialize Worker like this:
Server::Server(int thread_count)
: _workers(thread_count, Worker(*this)), _thread_count(thread_count) { }
And I'm currently doing this in Server::Run to create the threads.
boost::thread_group _threads; // a Server member variable
void Server::Run(){
for (int i = 0; i < _thread_count; i++)
_threads.create_thread(boost::bind(&Server::Worker::Run, _workers[i]));
// main thread.
while(1) {
// Do stuff
}
_threads.join_all();
}
Does anyone see any problems with this?
And how about safe shutdown?
EDIT:
One problem I have found with it is that the Worker objects don't seem to get constructed!
Oops, yes they do; I needed a copy constructor on the Worker class.
But oddly, creating the threads results in the copy constructor for Worker being called multiple times.
I have done it with the pure WinAPI; take a look:
#include <stdio.h>
#include <conio.h>
#include <windows.h>
#include <vector>
using namespace std;
class Server
{
public:
class Worker
{
int m_id;
DWORD m_threadId;
HANDLE m_threadHandle;
bool m_active;
friend Server;
public:
Worker (int id)
{
m_id = id;
m_threadId = 0;
m_threadHandle = 0;
m_active = true;
}
static DWORD WINAPI Run (LPVOID lpParam)
{
Worker* p = (Worker*) lpParam; // it's needed because of the static modifier
while (p->m_active)
{
printf ("I'm a thread #%i\n", p->m_id);
Sleep (1000);
}
return 0;
}
void Stop ()
{
m_active = false;
}
};
Server ()
{
m_workers = new vector <Worker*> ();
m_count = 0;
}
~Server ()
{
delete m_workers;
}
void Run ()
{
puts ("Server is run");
}
void Stop ()
{
while (m_count > 0)
RemoveWorker ();
puts ("Server has been stopped");
}
void AddWorker ()
{
HANDLE h;
DWORD threadId;
Worker* n = new Worker (m_count ++);
m_workers->push_back (n);
h = CreateThread (NULL, 0, Worker::Run, (VOID*) n, CREATE_SUSPENDED, &threadId);
n->m_threadHandle = h;
n->m_threadId = threadId;
ResumeThread (h);
}
void RemoveWorker ()
{
HANDLE h;
DWORD threadId;
if (m_count <= 0)
return;
Worker* n = m_workers->at (m_count - 1);
m_workers->pop_back ();
n->Stop ();
TerminateThread (n->m_threadHandle, 0);
m_count --;
delete n;
}
private:
int m_count;
vector <Worker*>* m_workers;
};
int main (void)
{
Server a;
int com = 1;
a.Run ();
while (com)
{
if (kbhit ())
{
switch (getch ())
{
case 27: // escape key code
com = 0;
break;
case 'a': // add worker
a.AddWorker ();
break;
case 'r': // remove worker
a.RemoveWorker ();
break;
}
}
}
a.Stop ();
return 0;
}
There is no synchronization code here, because I didn't have enough time to add it... but I hope it helps you =)
Have you looked at Boost.Asio at all? It looks like it could be a good fit for what you are trying to do. Additionally, you can call io_service::run (similar to your Run method) from many threads, i.e. you can process your IO in many threads. Also of interest could be http://think-async.com/Asio/Recipes for an Asio-based thread pool.
Have a look at the Asio examples; perhaps they offer an alternative way of handling what you are trying to do. In particular, look at how a clean shutdown is accomplished.
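As a rough illustration of that idea (my own minimal sketch, not taken from the linked recipe), a pool of threads can all run the same io_service; work posted to it executes on whichever thread is free, and a work guard keeps run() from returning until shutdown:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
#include <memory>
int main()
{
    boost::asio::io_service io;
    // The work object keeps io.run() from returning while tasks may still be posted
    std::unique_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io));
    // A small pool of threads, all servicing the same io_service
    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread([&io] { io.run(); });
    // Work posted here runs on whichever pool thread is free
    for (int i = 0; i < 8; ++i)
        io.post([i] { std::cout << "task " << i << "\n"; });
    // Clean shutdown: drop the work guard, then wait for run() to return in every thread
    work.reset();
    pool.join_all();
    return 0;
}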
The main() function creates a thread that is supposed to live until the user wishes to exit the program. The thread needs to return values to the main function at periodic intervals. I tried doing something like this, but it hasn't worked well:
std::queue<std::string> q;
void *start_thread(void *arg)
{
int num = *static_cast<int *>(arg); // thread routine must match pthread's signature
(void)num;
std::string str;
//Do some processing
q.push(str);
return NULL;
}
int main()
{
//Thread initialization
pthread_t m_thread;
int i = 0;
//Start thread
pthread_create(&m_thread, NULL, start_thread, static_cast<void *>(&i));
while(true)
{
if(q.front())
{
std::cout<<q.front();
return 0;
}
}
//Destroy thread.....
return 0;
}
Any suggestions?
It is not safe to read and write from STL containers concurrently. You need a lock to synchronize access (see pthread_mutex_t).
Your thread pushes a single value into the queue. You seem to be expecting periodic values, so you'll want to modify start_thread to include a loop that calls queue.push.
The return 0; in the consumer loop will exit main() when it finds a value in the queue. You'll always read a single value and exit your program. You should remove that return.
Using if (q.front()) is not the way to test if your queue has values (front assumes at least one element exists). Try if (!q.empty()).
Your while(true) loop is going to spin your processor pretty hard. You should look at condition variables to wait for values in the queue in a nicer manner (a minimal sketch follows).
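For illustration, here is a small self-contained sketch of that condition-variable approach (my own example, not part of the original answer): the consumer sleeps on the condition variable instead of spinning, and the producer signals after each push.
#include <pthread.h>
#include <queue>
#include <string>
#include <iostream>
std::queue<std::string> q;
pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;
// Producer side: push under the lock, then wake the waiting consumer
void push_value(const std::string &s)
{
    pthread_mutex_lock(&q_mutex);
    q.push(s);
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_mutex);
}
// Consumer side: sleep (instead of spinning) until the queue has something in it
std::string pop_value()
{
    pthread_mutex_lock(&q_mutex);
    while (q.empty())                 // loop guards against spurious wakeups
        pthread_cond_wait(&q_cond, &q_mutex);
    std::string s = q.front();
    q.pop();
    pthread_mutex_unlock(&q_mutex);
    return s;
}
void *producer(void *)
{
    push_value("hello from the worker thread");
    return NULL;
}
int main()
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    std::cout << pop_value() << std::endl; // blocks until the producer has pushed
    pthread_join(t, NULL);
    return 0;
}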
try locking a mutex before calling push() / front() on the queue.
Here is a working example of what it looks like you were trying to accomplish:
#include <iostream>
#include <queue>
#include <vector>
#include <semaphore.h>
#include <pthread.h>
#include <unistd.h>
struct ThreadData
{
sem_t sem;
pthread_mutex_t mut;
std::queue<std::string> q;
};
void *start_thread(void *num)
{
ThreadData *td = reinterpret_cast<ThreadData *>(num);
std::vector<std::string> v;
std::vector<std::string>::iterator i;
// create some data
v.push_back("one");
v.push_back("two");
v.push_back("three");
v.push_back("four");
i = v.begin();
// pump strings out until no more data
while (i != v.end())
{
// lock the resource and put string in the queue
pthread_mutex_lock(&td->mut);
td->q.push(*i);
pthread_mutex_unlock(&td->mut);
// signal activity
sem_post(&td->sem);
sleep(1);
++i;
}
// signal one last time so the consumer wakes up, sees the empty queue, and exits
sem_post(&td->sem);
return NULL;
}
int main()
{
bool exitFlag = false;
pthread_t m_thread;
ThreadData td;
// initialize semaphore to empty
sem_init(&td.sem, 0, 0);
// initialize mutex
pthread_mutex_init(&td.mut, NULL);
//Start thread
if (pthread_create(&m_thread, NULL, start_thread, static_cast<void *>(&td)) != 0)
{
exitFlag = true;
}
while (!exitFlag)
{
if (sem_wait(&td.sem) == 0)
{
pthread_mutex_lock(&td.mut);
if (td.q.empty())
{
exitFlag = true;
}
else
{
std::cout << td.q.front() << std::endl;
td.q.pop();
}
pthread_mutex_unlock(&td.mut);
}
else
{
// something bad happened
exitFlag = true;
}
}
return 0;
}