I have a Lua script that performs a long-running task, such as fetching a web page. I make the script yield, and the C code handles the get-page job asynchronously, so the thread is free to do other work. After a specified interval it checks whether the get-page job has finished and, if so, resumes the script. The problem is that the thread can't resume the script after the async wait.
Here is my code. I ripped it out of a class, so it's a little messy, sorry.
//// Script:
function Loginmegaupload_com(hp, user, pass, cookie)
    setURL(hp, "http://megaupload.com/?c=login")
    importPost(hp, "login=1&redir=1")
    addPost(hp, "username", user)
    addPost(hp, "password", pass)
    GetPage()
    if isHeaderContain(hp, "user=") ~= nil then
        SetFileLink(cookie, GetAllCookie(hp))
        return 1
    else
        return 0
    end
end
//// C code
int FileSharingService::GetPage(lua_State *ls)
{
    return lua_yield(ls, 0);
}
void FileSharingService::AsyncWait(Http_RequestEx *Http, lua_State *LS, boost::asio::deadline_timer *Timer)
{
    if ((Http->status_code == Http_RequestEx::ERROR) || (Http->status_code == Http_RequestEx::FISNISHED))
    {
        if (Http->status_code == Http_RequestEx::FISNISHED)
        {
            int result = lua_resume(LS, 0); // here I get result == 2 -- does that mean an error?
            if (result == 0) // script exited normally, resume succeeded
            {
                delete Http;
                delete Timer;
            }
        }
        else
            return;
    }
    else
    {
        Timer->expires_from_now(boost::posix_time::milliseconds(200));
        Timer->async_wait(boost::bind(&FileSharingService::AsyncWait, this, Http, LS, Timer));
    }
}
bool FileSharingService::Login(string URL, string User, string Pass, string &Cookie)
{
    Http_RequestEx *http = new Http_RequestEx;
    http->url = URL;
    LuaWarper *Lua = Lua_map[boost::this_thread::get_id()]; // one main lua_State per io_service thread
    lua_State *thread = lua_newthread(Lua->GetState());
    boost::asio::deadline_timer *timer = new boost::asio::deadline_timer(*HClient.ioservice);
    string functioname = "Login" + GetServicename(URL);
    if (Lua->isFunctionAvaliable(functioname.c_str()) == false)
    {
        throw(FileSharingService::SERVICE_NOT_AVALIABLE);
    }
    else
    {
        lua_getglobal(thread, functioname.c_str());
        lua_pushlightuserdata(thread, http);
        lua_pushstring(thread, User.c_str());
        lua_pushstring(thread, Pass.c_str());
        lua_pushlightuserdata(thread, &Cookie);
        int result = lua_resume(thread, 4);
        if (result == LUA_YIELD)
        {
            HClient.Do(*http, false);
            AsyncWait(http, thread, timer);
        }
        else if (result == 0)
        {
            // finished on the first call
        }
        else
        {
            // resume error, will handle later
        }
    }
}
Sorry, never mind this question. lua_resume returning 2 means error, but the script works just fine, and the asio get-page works fine too. I tracked down the line responsible for the lua_resume failure:
httpinfo.header.append(buffer, (HeaderEndIndex - buffer + 2));
If I comment out that line, lua_resume works as expected and returns 0, meaning the script exited. This line doesn't do anything that could affect the Lua thread state; it's just a string append, and I checked that there is no overflow. So weird.
I am debugging an issue with WinDbg which I can consistently reproduce. The problem is that when I run the executable under WinDbg to debug it, the issue can't be reproduced. What could be the reason?
Here is the code that behaves differently:
CWnd* pWnd = GetDlgItem(IDOKCANCEL);
if(pWnd)
{
CString sOK;
sOK.LoadString(IDS_OK);
pWnd->SetWindowText(sOK);
}
Here the button text is updated properly when I run with WinDbg but it is not updated when I run it normally (which is the bug).
Update
Like I said in the comments, the issue is not with the code above because it doesn't even get called. The operation is done in a worker thread which sends update messages to this dialog. The final message that would execute the above code is never sent, so the code above never runs.
Why the worker thread doesn't send this message is interesting. It gets blocked on a critical section while opening a database. WinDbg tells me that the main thread is the owner of that critical section, but I can't see from the call stack, or in any other way, where it failed to unlock the critical section.
What complicates the problem is that it works fine if I run it under the debugger. I added log output, but with that change it also starts to work fine.
The only way I can catch it with a debugger is to run it in normal mode, reproduce the problem, then attach the debugger; it then shows me the thread is blocked on the critical section. It shows the main thread is the owner of that critical section, but it's not clear why it is still locked. The critical section is simply locked and unlocked within one function, and execution has left that function.
Update 2
I am using the critical section in only one file in my entire project, and there only in two functions (where it opens the database and the recordset).
BOOL CADODatabase::Open(LPCTSTR lpstrConnection, LPCTSTR lpstrUserID, LPCTSTR lpstrPassword)
{
    CString database = GetSourceDatabase( lpstrConnection, NULL );
    // get the appropriate critical section based on database
    g_dbCriticalSection = GetDbCriticalSection( database );
    if( g_dbCriticalSection)
        g_dbCriticalSection->Lock();
    HRESULT hr = S_OK;
    if(IsOpen())
        Close();
    if(wcscmp(lpstrConnection, _T("")) != 0)
        m_strConnection = lpstrConnection;
    ASSERT(!m_strConnection.IsEmpty());
    try
    {
        if(m_nConnectionTimeout != 0)
            m_pConnection->PutConnectionTimeout(m_nConnectionTimeout);
        hr = m_pConnection->Open(_bstr_t(m_strConnection), _bstr_t(lpstrUserID), _bstr_t(lpstrPassword), NULL);
        if( g_dbCriticalSection)
            g_dbCriticalSection->Unlock();
        return hr == S_OK;
    }
    catch(_com_error &e)
    {
        dump_com_error(e);
        if( g_dbCriticalSection)
            g_dbCriticalSection->Unlock();
        return FALSE;
    }
}
The second function has other visible imperfections, but please ignore those; it's legacy code.
BOOL CADORecordset::Open(_ConnectionPtr mpdb, LPCTSTR lpstrExec, int nOption)
{
    BSTR bstrConnString;
    m_pConnection->get_ConnectionString(&bstrConnString);
    CString database = GetSourceDatabase( bstrConnString, m_pConnection );
    g_dbCriticalSection = GetDbCriticalSection( database );
    if( g_dbCriticalSection)
        g_dbCriticalSection->Lock();
    Close();
    if(wcscmp(lpstrExec, _T("")) != 0)
        m_strQuery = lpstrExec;
    ASSERT(!m_strQuery.IsEmpty());
    if(m_pConnection == NULL)
        m_pConnection = mpdb;
    m_strQuery.TrimLeft();
    BOOL bIsSelect = m_strQuery.Mid(0, _tcslen(_T("Select "))).CompareNoCase(_T("select ")) == 0 && nOption == openUnknown;
    int maxRetries = 10;
    bool bContinue = true;
    CursorTypeEnum adCursorType = adOpenStatic;
    if (!m_bSQLEngine)
    {
        // MDB Engine
        adCursorType = adOpenStatic;
        m_pConnection->CursorLocation = adUseClient;
    }
    else
    {
        // SQL Engine
        adCursorType = adOpenDynamic;
        m_pConnection->CursorLocation = adUseServer;
    }
    int currentCommandTimeout = m_pConnection->CommandTimeout;
    if( g_dbCriticalSection)
        g_dbCriticalSection->Unlock();
    for (int iRetry = 0; (iRetry < maxRetries) && bContinue; iRetry++)
    {
        try
        {
            // we just use an auto lock object so it is unlocked automatically, it uses same
            // critical section object.
            if( g_dbCriticalSection)
                g_dbCriticalSection->Lock();
            int newCommandTimeout = currentCommandTimeout + 15 * iRetry;
            m_pConnection->CommandTimeout = newCommandTimeout;
            if(bIsSelect || nOption == openQuery || nOption == openUnknown)
            {
                m_pRecordset->Open((LPCTSTR)m_strQuery, _variant_t((IDispatch*)mpdb, TRUE),
                                   adCursorType, adLockOptimistic, adCmdUnknown);
            }
            else if(nOption == openTable)
            {
                m_pRecordset->Open((LPCTSTR)m_strQuery, _variant_t((IDispatch*)mpdb, TRUE),
                                   adOpenDynamic, adLockOptimistic, adCmdTable);
            }
            else if(nOption == openStoredProc)
            {
                m_pCmd->ActiveConnection = mpdb;
                m_pCmd->CommandText = _bstr_t(m_strQuery);
                m_pCmd->CommandType = adCmdStoredProc;
                m_pRecordset = m_pCmd->Execute(NULL, NULL, adCmdText);
            }
            else
            {
                TRACE( _T("Unknown parameter. %d"), nOption);
                if( g_dbCriticalSection)
                    g_dbCriticalSection->Unlock();
                return FALSE;
            }
            if( g_dbCriticalSection)
                g_dbCriticalSection->Unlock();
            bContinue = false;
        }
        catch(_com_error &e)
        {
            if( g_dbCriticalSection)
                g_dbCriticalSection->Unlock();
            dump_com_error_without_exception(e, _T("Open"));
            // retry Query timeout
            CString szDescription;
            _bstr_t bstrDescription(e.Description());
            szDescription.Format( _T("%s"), (LPCTSTR)bstrDescription);
            if ((szDescription.Find(_T("Query timeout expired")) == -1) || (iRetry == maxRetries - 1))
            {
                m_pConnection->CommandTimeout = currentCommandTimeout;
                throw CADOException(e.Error(), e.Description());
            }
            Sleep (1000);
            bContinue = true;
        }
    }
    m_pConnection->CommandTimeout = currentCommandTimeout;
    return m_pRecordset != NULL && m_pRecordset->GetState()!= adStateClosed;
}
For the sake of completeness, the above calls this function:
static CCriticalSection* GetDbCriticalSection(const CString& database)
{
    // For now we only care about one database and its corresponding critical section
    if (database.CompareNoCase( _T("Alr") ) == 0)
        return &g_csAlrDb; // g_csAlrDb is defined static global in this file
    else
        return 0;
}
The Open() function gets called for various databases; I am only guarding access to one database. As you can see, every lock has a corresponding unlock, so I'm not sure how code can come out of these functions leaving the critical section locked. Could it be an MFC issue?
In my case, most of the time, when C++ software behaves differently between debug and release versions, it's because of uninitialized variables, different libraries linked, or compiler optimizations backfiring.
To trace the bug, try evaluating variables and function return values (e.g. LoadString), for example with AfxMessageBox().
I ran into a rather strange situation when using std::future and ThreadPool (I'm using https://github.com/bandi13/ThreadPool/blob/master/example.cpp), though I do not think ThreadPool itself is at fault: I've tried multiple forks of it, and after some debugging I do not see how it would be related to the issue.
The issue is that under certain circumstances my doProcess method just goes to nirvana: it does not return, it just disappears in the middle of a long-running loop.
Therefore I think I must be doing something wrong, but can't figure out what.
Here's the code:
ThreadPool pool(numThreads);
std::vector< std::future<bool> > futures;
int count = 0;
string orgOut = outFile;
for (auto fileToProcess : filesToProcess) {
    count++;
    outFile = orgOut + std::to_string(count);
    // enqueue processing in the thread pool
    futures.emplace_back(
        pool.enqueue([count, fileToProcess, outFile, filteredKeys, sql] {
            return doProcess(fileToProcess, outFile, filteredKeys, sql);
        })
    );
}
Then I wait for all processing to be done (this could probably be done more elegantly):
bool done = false;
while (!done) {
    done = true;
    for (auto && futr : futures) {
        auto status = futr.wait_for(std::chrono::milliseconds(1));
        if (status != std::future_status::ready) {
            done = false;
            break;
        }
    }
}
Edit: At first I also tried the obvious wait(), with the same result however:
bool done = false;
while (!done) {
    done = true;
    for (auto && futr : futures) {
        futr.wait();
    }
}
Edit: The doProcess() method. The behavior is this: the loopcnt variable is just a counter to debug how often the method was entered and the loop started. As you can see, there is no return inside this loop, but the thread just vanishes while inside it, with no error whatsoever, and wasHereCnt is reached only occasionally (like 1 in 100 times the method is run). I'm really puzzled.
bool doProcess([...]) {
    // ....
    vector<vector<KVO*>*>& features = filter.result();
    vector<vector<KVO*>*> filteredFeatures;

    static int loopcnt = 0;
    std::cout << "loops " << loopcnt << endl;
    loopcnt++;

    for (vector<KVO*>* feature : features) {
        for (KVO *kv : *feature) {
            switch (kv->value.type()) {
            case Variant::JNULL:
                sqlFilter.setNullValue(kv->key);
                break;
            case Variant::INT:
                sqlFilter.setValue(static_cast<int64_t>(kv->value), kv->key);
                break;
            case Variant::UINT:
                sqlFilter.setValue(static_cast<int64_t>(kv->value), kv->key);
                break;
            case Variant::DOUBLE:
                sqlFilter.setValue(static_cast<double>(kv->value), kv->key);
                break;
            case Variant::STRING:
                sqlFilter.setValue(static_cast<string>(kv->value), kv->key);
                break;
            default:
                assert(false);
                break;
            }
        }
        int filterResult = sqlFilter.exec();
        if (filterResult > 0) {
            filteredFeatures.push_back(feature);
        }
        sqlFilter.reset();
    }

    static int wasHereCnt = 0;
    std::cout << "was here: " << wasHereCnt << endl;
    wasHereCnt++;

    JsonWriter<Writer<FileWriteStream>> geojsonWriter(writer, filteredFeatures);
    bool res = geojsonWriter.write();
    os.Flush();
    fclose(fp);
    return res;
}
The doProcess method works when it takes less time; it breaks and disappears when it takes somewhat longer. The difference is just the complexity of an SQL query I run in the method, so I won't post the rest of doProcess().
What causes the thread of the thread pool to be interrupted, and how to fix it?
UPDATE
Well, I found it. After several hours I decided to remove the future tasks and just run the task on the main thread. The issue was that an exception was thrown via:
throw std::runtime_error("bad cast");
... some time down the code flow after this:
case Variant::UINT:
    sqlFilter.setValue(static_cast<int64_t>(kv->value), kv->key);
    break;
This error was thrown as expected when running on the main thread, but it's never raised when run as a future task. This is really odd and seems like a compiler or debugger issue.
I have an HTTP game server that I am setting up, and I have one function that returns a lot of information about the map. The output from the server is about 7800 characters long, but when I get the contents of the URL in the game, the game only receives 1124 characters.
Is there a limit on the length of the response content of an IHttpRequest?
Pertinent code:
FString ANetwork::getContentsOfURL(FString URL, TArray<FString> keys, TArray<FString> values)
{
    serverResponse = NULL;
    TSharedRef<IHttpRequest> HttpRequest = FHttpModule::Get().CreateRequest();
    HttpRequest->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
    int32 count = keys.Num();
    URL += "?auth=" + authenticator;
    for (int i = 0; i < count; i++)
    {
        URL += "&" + keys[i] + "=" + values[i];
    }
    HttpRequest->SetURL(URL);
    HttpRequest->SetVerb(TEXT("GET"));
    HttpRequest->OnProcessRequestComplete().BindUObject(this, &ANetwork::OnResponseReceived);
    HttpRequest->ProcessRequest();

    bool wait = true;
    while (wait)
    {
        FHttpResponsePtr response = HttpRequest->GetResponse();
        FHttpResponsePtr httpnull;
        if (response != httpnull)
        {
            if (HttpRequest->GetResponse()->GetContentAsString() != "")
            {
                return HttpRequest->GetResponse()->GetContentAsString();
            }
        }
    }
    return "";
}
On a side note, I'm not sure how to check whether an FHttpResponsePtr points to a null object. I thought I had it with the code in the while loop, but it doesn't seem to have made a difference. Once in a while, the code breaks because the response is null when I try to access the content as a string.
Does anyone know how to properly check whether it is null?
Edit:
Per @TheBrain's answer, here is my revised loop:
bool wait = true;
while (wait)
{
    if (HttpRequest->GetStatus() != EHttpRequestStatus::Processing)
    {
        FHttpResponsePtr response = HttpRequest->GetResponse();
        if (response.IsValid())
        {
            return response->GetContentAsString();
        }
        else
            return "INVALID";
    }
}
return "";
This causes an infinite loop, however.
I don't think there is such a small limit on the response. It looks more like you are fetching the response before the request has actually been processed. You should try to call GetResponse() only after GetStatus() returns something other than Processing.
On the null-pointer check: FHttpResponsePtr is nothing other than a TSharedPtr. As with any TSharedPtr, you can use IsValid() on the pointer itself. For example, with your code from above:
FHttpResponsePtr response = HttpRequest->GetResponse();
if (response != nullptr) { // wrong, the pointer itself is never null!
if (response.IsValid()) { // correct, check for pointer validity
if (response.Get() != nullptr) { // basically the same, but longer
EDIT:
Sorry for the misunderstanding. You must never block the game loop with a while loop like that. So you have two possibilities:
You do the check from the while loop, but only once during your actor's tick event.
You wait for your callback delegate to fire.
Here is a working code sample using a delegate:
void AYourActor::NetworkTest()
{
    TSharedRef<IHttpRequest> HttpRequest = FHttpModule::Get().CreateRequest();
    HttpRequest->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
    HttpRequest->SetURL("http://www.google.com");
    HttpRequest->SetVerb(TEXT("GET"));
    HttpRequest->OnProcessRequestComplete().BindUObject(this, &AYourActor::OnResponseReceived);
    HttpRequest->ProcessRequest();
}

void AYourActor::OnResponseReceived(FHttpRequestPtr request, FHttpResponsePtr response, bool didConnect)
{
    UE_LOG(LogExec, Warning, TEXT("Response received %d!"), didConnect);
    UE_LOG(LogExec, Warning, TEXT("Response: %s"), *(response->GetContentAsString()));
}
Consider the following example taken from N3650:
int cnt = 0;
do {
    cnt = await streamR.read(512, buf);
    if (cnt == 0)
        break;
    cnt = await streamW.write(cnt, buf);
} while (cnt > 0);
I am probably missing something, but if I understood async and await correctly, what is the point of showing the usefulness of the two constructs with the above example, when the effects are equivalent to writing:
int cnt = 0;
do {
    cnt = streamR.read(512, buf).get();
    if (cnt == 0)
        break;
    cnt = streamW.write(cnt, buf).get();
} while (cnt > 0);
where both the read().get() and write().get() calls are synchronous?
The await keyword is not equivalent to calling get on a future. You might look at it more like this; suppose you start from:
future<T> complex_function()
{
    do_some_stuff();
    future<Result> x = await some_async_operation();
    return do_some_other_stuff(x);
}
This is functionally more or less the same as
future<T> complex_function()
{
    do_some_stuff();
    return some_async_operation().then([=](future<Result> x) {
        return do_some_other_stuff(x);
    });
}
Note the "more or less", because there are some resource-management implications: variables created in do_some_stuff shouldn't be copied in order to execute do_some_other_stuff, as the lambda version will do.
The second variant makes it more clear what will happen upon invocation.
The do_some_stuff() will be invoked synchronously when you call complex_function.
some_async_operation is called asynchronously and results in a future. The exact moment this operation is executed depends on your actual asynchronous calling implementation: it might be immediate when you use threads, or it might be whenever .get() is called when you use deferred execution.
We don't execute do_some_other_stuff immediately, but rather chain it onto the future obtained in step 2. This means it can be executed as soon as the result from some_async_operation is ready, but not before. Aside from that, its moment of execution is determined by the runtime. If the implementation simply wrapped the then proposal, it would inherit the parent future's executor/launch policy (as per N3558).
The function returns the last future, that represents the eventual result. Note this NEEDS to be a future, as part of the function body is executed asynchronously.
A more complete example (hopefully correct):
future<void> forwardMsgs(istream& streamR, ostream& streamW) async
{
    char buf[512];
    int cnt = 0;
    do {
        cnt = await streamR.read(512, buf);
        if (cnt == 0)
            break;
        cnt = await streamW.write(cnt, buf);
    } while (cnt > 0);
}
future<void> fut = forwardMsgs(myStreamR, myStreamW);
/* do something */
fut.get();
The important point is (quoting from the draft):
After suspending, a resumable function may be resumed by the scheduling logic of the runtime and will eventually complete its logic, at which point it executes a return statement (explicit or implicit) and sets the function’s result value in the placeholder.
and:
A resumable function may continue execution on another thread after resuming following a suspension of its execution.
That is, the thread who originally called forwardMsgs can return at any of the suspension points. If it does, during the /* do something */ line, the code inside forwardMsgs can be executed by another thread even though the function has been called "synchronously".
This example is very similar to
future<void> fut = std::async(forwardMsgs, myStreamR, myStreamW);
/* do something */
fut.get();
The difference is the resumable function can be executed by different threads: a different thread can resume execution (of the resumable function) after each resumption/suspension point.
I think the idea is that the streamR.read() and streamW.write() calls are asynchronous I/O operations and return futures, which are automatically waited on by the await expressions.
So the equivalent synchronous version would have to call future::get() to obtain the results e.g.
int cnt = 0;
do {
    cnt = streamR.read(512, buf).get();
    if (cnt == 0)
        break;
    cnt = streamW.write(cnt, buf).get();
} while (cnt > 0);
You're correct to point out that there is no concurrency here. However, in the context of a resumable function, await makes the behaviour different from the snippet above. When the await is reached, the function returns a future, so the caller can proceed without blocking, even if the resumable function is blocked at an await while waiting for some other result (in this case, for the read() or write() calls to finish). The resumable function may resume running asynchronously, so the result becomes available in the background while the caller is doing something else.
Here's the correct translation of the example function to not use await:
struct Copy$StackFrame {
    promise<void> $result;
    input_stream& streamR;
    output_stream& streamW;
    int cnt;
    char buf[512];
};
using Copy$StackPtr = std::shared_ptr<Copy$StackFrame>;

future<void> Copy(input_stream& streamR, output_stream& streamW) {
    Copy$StackPtr $stack{ new Copy$StackFrame{ {}, streamR, streamW, 0 } };
    future<int> f$1 = $stack->streamR.read(512, $stack->buf);
    f$1.then([$stack](future<int> f) { Copy$Cont1($stack, std::move(f)); });
    return $stack->$result.get_future();
}

void Copy$Cont1(Copy$StackPtr $stack, future<int> f$1) {
    try {
        $stack->cnt = f$1.get();
        if ($stack->cnt == 0) {
            // break;
            $stack->$result.set_value();
            return;
        }
        future<int> f$2 = $stack->streamW.write($stack->cnt, $stack->buf);
        f$2.then([$stack](future<int> f) { Copy$Cont2($stack, std::move(f)); });
    } catch (...) {
        $stack->$result.set_exception(std::current_exception());
    }
}

void Copy$Cont2(Copy$StackPtr $stack, future<int> f$2) {
    try {
        $stack->cnt = f$2.get();
        // while (cnt > 0)
        if ($stack->cnt <= 0) {
            $stack->$result.set_value();
            return;
        }
        future<int> f$1 = $stack->streamR.read(512, $stack->buf);
        f$1.then([$stack](future<int> f) { Copy$Cont1($stack, std::move(f)); });
    } catch (...) {
        $stack->$result.set_exception(std::current_exception());
    }
}
As you can see, the compiler transformation here is quite complex. The key point is that, unlike the get() version, the original Copy returns its future as soon as the first async call has been made.
I have the same issue with the meaning of the difference between these two code samples. Let's rewrite them a little to be more complete.
// Having two functions
future<void> f(istream& streamR, ostream& streamW) async
{
    int cnt = 0;
    do {
        cnt = await streamR.read(512, buf);
        if (cnt == 0)
            break;
        cnt = await streamW.write(cnt, buf);
    } while (cnt > 0);
}

void g(istream& streamR, ostream& streamW)
{
    int cnt = 0;
    do {
        cnt = streamR.read(512, buf).get();
        if (cnt == 0)
            break;
        cnt = streamW.write(cnt, buf).get();
    } while (cnt > 0);
}

// what is the difference between
auto a = f(streamR, streamW);
// and
auto b = async(g, streamR, streamW);
You still need at least three stacks. In both cases the main thread is not blocked. Is the assumption that await would be implemented by the compiler more efficiently than future<>::get()? Well, the one without await can be used now.
Thanks
Adam Zielinski
I'm implementing a chat application using Jabber/XMPP and the gloox framework, which should send and receive messages concurrently, on Ubuntu Linux.
My current code is:
int main()
{
    ...
    int temp = pthread_create(&iSend, NULL, SendMessage, &b);
    int temp1 = pthread_create(&iRecv, NULL, ConnServer, &b);
}

void* ConnServer(void *athis)
{
    UserClient *t = (UserClient*)athis;
    t->ConnecttosServer();
}

bool UserClient::ConnecttosServer()
{
    //JID jid( "map#jo-hely.hu/gloox" );
    j = new Client( iUserJid, iUser.getPassword() );
    j->registerMessageHandler( this );
    j->registerConnectionListener( this );
    j->registerMessageSessionHandler(this);
    bool result = j->connect(false);
    if(result == true)
    {
        iConnected = true;
        ConnectionError er = ConnNoError;
        ConnectionError er1 = ConnNoError;
        while(er == ConnNoError || er1 == ConnNoError)
        {
            er = j->recv(500000);
            sleep(2);
        }
        delete j;
    }
    ...
}

void* SendMessage(void *athis) // JID *aDest)
{
    UserClient *t = (UserClient*)athis;
    //JID *t = (JID)dest;
    string ip;
    cout << "enter here";
    cin >> ip;
    if(t->iConnected == true)
    {
        if(t->iMessageSession == NULL)
        {
            string aBody = "hello";
            MessageSession *session = new MessageSession(t->j, t->iDestJid);
            session->registerMessageHandler(t);
            session->send(aBody.c_str());
        }
    }
}
The problem I face is this: both threads are created and pthread_join() is called for both. The iSend thread is scheduled first but gets suspended at cin. Once recv() is called, which runs in the iRecv thread, the recv callback handleMessage() is invoked. However, control never shifts back to the iSend thread, which should run SendMessage().
Please help
I cannot see how SendMessage would ever send more than one "hello" message.
There are various memory issues here, of course; e.g. j won't get deleted at all if connect fails, and since its scope is function-only there is no real need to create it with new at all.
You cannot count on the iSend thread being scheduled first. That is completely up to the operating system.