I know, this may be a lot to ask, but can anyone help me debug this code:
#include <stdio.h>
#include <string.h>
#include <omnetpp.h>
using namespace omnetpp;
class Node : public cSimpleModule
{
private:
cMessage *out_msg;
long no_sent = 0;
long no_rcvd = 0;
cOutVector rcvdRecord;
cLongHistogram Statistics;
public:
Node();
virtual ~Node();
protected:
virtual void initialize() override;
virtual void handleMessage(cMessage *msg) override;
virtual void finish() override;
};
Define_Module(Node);
Node::Node()
{
out_msg = nullptr;
}
Node::~Node()
{
delete out_msg;
}
void Node::initialize()
{
out_msg = nullptr;
if (strcmp("sender", getName()) == 0) {
EV << "Scheduling first send to t=5.0s\n";
scheduleAt(5.0, out_msg);
out_msg = new cMessage("Sending Message");
}
}
void Node::handleMessage(cMessage *msg)
{
if (msg == out_msg) {
EV << "Sending message to receiver\n";
send(out_msg, "out");
out_msg = nullptr;
no_sent++;
simtime_t delay = par("delayTime");
scheduleAt(simTime() + delay, out_msg);
}
else {
out_msg = msg;
no_rcvd++;
rcvdRecord.record(out_msg);
Statistics.collect(out_msg); //what's going on here ?
}
}
void Node::finish()
{
EV << "Sent: " << no_sent << endl;
EV << "Received: " << no_rcvd << endl;
EV << "Messages sent, mean: " << Statistics.getMean() << endl;
EV << "Messages sent, standard deviation: " << Statistics.getStddev() << endl;
EV << "Messages sent, variance: " << Statistics.getVariance() << endl;
recordScalar("#sent", no_sent);
recordScalar("#received", no_rcvd);
Statistics.recordAs("Message Statistics");
}
I get the following error message:
Exercise2.cc:66:38: error: no matching function for call to
'omnetpp::cOutVector::record(omnetpp::cMessage*&)'
Exercise2.cc:67:39: error: no matching function for call to
'omnetpp::cLongHistogram::collect(omnetpp::cMessage*&)'
So I really don't know what this is supposed to tell me. Aren't these built-in functions, part of the cOutVector or cLongHistogram classes respectively?
Aren't these built-in functions, part of the cOutVector or cLongHistogram classes respectively?
They aren't. Well, cOutVector does have a member function named record(), but it can't take a cMessage * as an argument, so the specific overload you are trying to call doesn't exist. The same goes for cLongHistogram and collect().
Just take a look at the documentation:
A cOutVector object can write doubles to the output vector file ...
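So the fix is to record a numeric value instead of the message pointer. A minimal sketch for your else branch, keeping everything else as it is (which quantity you record is up to you; the running count and the arrival time are just illustrations):
rcvdRecord.record(no_rcvd);             // cOutVector::record() takes a double, e.g. the running count of received messages
Statistics.collect(simTime().dbl());    // cLongHistogram::collect() also wants a number, e.g. the arrival time in seconds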
And, by the way, what exactly do you expect to see as a "histogram of messages"? :D This comic comes to my mind...
To record the messages (not into a cOutVector), you can enable event logging. The resulting file can be visualized in the Sequence Chart tool of the IDE, see: https://docs.omnetpp.org/tutorials/tictoc/part2/#25-visualizing-on-a-sequence-chart
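If I remember correctly, turning event logging on is a single line in omnetpp.ini:
[General]
record-eventlog = true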
I'm a newbie in OMNeT++ and I want to build a simple broadcast mechanism where a starting node sends a msg and it gets broadcast to all nodes connected in a mesh topology. When the msg reaches its destination, it should be deleted and not broadcast any further. I've been working on this for two days, following the official documentation. My code simulates successfully, but it doesn't delete the msg when it reaches the destination. I'm starting with just 25 nodes (computer), where the source node is computer0 and the destination node is computer24. Where am I going wrong? Below is my source code:
#include <stdio.h>
#include <string.h>
#include <omnetpp.h>
using namespace omnetpp;
class computer : public cSimpleModule
{
protected:
virtual void forwardMessage(cMessage *msg);
virtual void initialize() override;
virtual void handleMessage(cMessage *msg) override;
};
Define_Module(computer);
void computer::initialize()
{
if (getIndex() == 0) {
char msgname[20];
sprintf(msgname, "msg-%d", getIndex());
cMessage *msg = new cMessage(msgname);
scheduleAt(0.0, msg);
}
}
void computer::handleMessage(cMessage *msg)
{
if (getIndex() == 24) {
EV << "Message " << msg << " arrived.\n";
delete msg;
}
else {
forwardMessage(msg);
}
}
void computer::forwardMessage(cMessage *msg)
{
int n = gateSize("gate");
int k = intuniform(0, n-1);
EV << "Forwarding message " << msg << " on gate[" << k << "]\n";
send(msg, "gate$o", k);
}
simulation snap: https://i.stack.imgur.com/a4JRQ.png
You use the getIndex() method, which returns the position of a module within a module vector (an array of modules). However, the simulation snapshot shows that you do not use a module vector, so getIndex() returns zero for every module.
The simplest way to correct your model is to check the name of a module instead of its position, for example:
void computer::initialize()
{
// if (getIndex() == 0) {
if (isName("computer0")) {
char msgname[20];
// ...
void computer::handleMessage(cMessage *msg)
{
// if (getIndex() == 24) {
if (isName("computer24")) {
EV << "Message " << msg << " arrived.\n";
// ...
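For reference, the two functions could look like this in full (a sketch based on your original code; it assumes the submodules really are named computer0 ... computer24 in the NED file, and it builds the message name from getName() since getIndex() is always zero here):
void computer::initialize()
{
    if (isName("computer0")) {
        char msgname[20];
        sprintf(msgname, "msg-%s", getName());   // e.g. "msg-computer0"
        cMessage *msg = new cMessage(msgname);
        scheduleAt(0.0, msg);
    }
}
void computer::handleMessage(cMessage *msg)
{
    if (isName("computer24")) {
        EV << "Message " << msg << " arrived.\n";
        delete msg;
    }
    else {
        forwardMessage(msg);
    }
}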
I have written this code to simulate a parking system; however, I have a problem when I have to reuse a struct that contains a message queue for my car thread. The problem occurs when I try to communicate with the handler for exiting using ID_CAR_IND. The struct is sent as a message, so I suspect it gets deleted before arriving, but I can't grasp what is happening or where it goes wrong. It should be noted that it is a requirement of the implementation that pthreads are used and three threads are created.
#include <iostream>
#include <pthread.h>
#include <stdlib.h>
#include <queue>
#include <time.h>
#include <unistd.h>
#include "Message.h"
#include "MsgQueue.h"
using namespace std;
enum{
ID_START_IND,
ID_ENTRY_REQ,
ID_ENTRY_CFM,
ID_CAR_IND,
ID_STAY_HERE,
ID_EXIT_REQ,
ID_EXIT_OUT,
ID_EXIT_CFM
};
//MESSAGES
struct Car : public Message
{
Car(int carID, MsgQueue* queue) : id(carID), carmq(queue){}
MsgQueue *carmq;
int id;
};
struct OpenReq : public Message
{
MsgQueue *Who_is_asking_;
};
struct CloseReq : public Message
{
MsgQueue *Who_is_asking_exit;
};
struct EntryDoorOpen : public Message{
bool result;
};
MsgQueue entryMq(20);
MsgQueue exitMq(20);
void carHandler(Car* car, unsigned id, Message* msg){
switch(id){
case ID_START_IND:
{
cout << "car " << car->id << " wants to enter" << endl;
OpenReq * req = new OpenReq();
req->Who_is_asking_ = car->carmq;
entryMq.send(ID_ENTRY_REQ, req);
}
break;
case ID_ENTRY_CFM:
{
cout << "car " << car->id << " entered parking" << endl;
entryMq.send(ID_CAR_IND);
}
break;
case ID_STAY_HERE:
{
}
break;
case ID_EXIT_CFM:
{
cout << "car " << car->id << "Left parking" << endl;
exitMq.send(ID_EXIT_OUT);
}
break;
default:
break;
}
}
void entryHandler(unsigned id, Message* msg){
OpenReq* req=static_cast<OpenReq*>(msg);
switch(id){
case ID_ENTRY_REQ:
{
cout << "Access granted. Opening entry door " << endl;
req->Who_is_asking_->send(ID_ENTRY_CFM);
}
break;
case ID_CAR_IND:
{
cout << "Closing entry door " << endl;
sleep(2);
req->Who_is_asking_->send(ID_EXIT_REQ);
}
break;
default:
break;
}
}
void exitHandler(unsigned id, Message * msg)
{
OpenReq* req = static_cast<OpenReq*>(msg);
switch(id)
{
case ID_EXIT_REQ:
{
cout << "Leaving is Granted. Opening exit door" << endl;
req->Who_is_asking_->send(ID_EXIT_CFM);
}
break;
case ID_EXIT_OUT:
{
cout << "Car has left the parkinglot" << endl;
}
break;
default:
break;
}
}
void *car(void* data){
Car *car = static_cast<Car*>(data);
car->carmq->send(ID_START_IND);
for(;;){
unsigned long id;
Message *msg = car->carmq->receive(id);
carHandler(car,id,msg);
delete(msg);
}
}
void *entry(void* data){
for(;;){
unsigned long id;
Message *msg = entryMq.receive(id);
entryHandler(id,msg);
delete(msg);
}
}
void *exit(void * data){
for(;;){
unsigned long id;
Message *msg = exitMq.receive(id);
exitHandler(id,msg);
delete(msg);
}
}
int main()
{
MsgQueue q(10);
Car carObj(1, &q);
pthread_t carThread, entryThread;
pthread_create(&carThread,nullptr,car, &carObj);
pthread_create(&entryThread,nullptr,entry, nullptr);
pthread_join(carThread,nullptr);
return 0;
}
//
// Created by stud on 11/3/19.
//
#include "MsgQueue.h"
#include "Message.h"
#include <iostream>
MsgQueue::MsgQueue(unsigned long maxSize) : maxSize_(maxSize)
{
// Initialize the pthread primitives (mutex and condition variable).
pthread_mutex_init(&msgmutex, NULL);
pthread_cond_init(&msgcond,NULL);
};
void MsgQueue::send(unsigned long id, Message* msg)
{
pthread_mutex_lock(&msgmutex);
while(msgqueue_.size() == maxSize_)
{
pthread_cond_wait(&msgcond, &msgmutex);
}
info besked;
besked.id = id;
besked.msg = msg;
msgqueue_.push(besked);
pthread_cond_broadcast(&msgcond);
pthread_mutex_unlock(&msgmutex);
//std::cout << "sending from id #" << id << std::endl;
};
Message* MsgQueue::receive(unsigned long&id)
{
pthread_mutex_lock(&msgmutex);
while(msgqueue_.empty())
{
pthread_cond_wait(&msgcond,&msgmutex);
}
info besked;
besked = msgqueue_.front();
id = besked.id;
msgqueue_.pop();
pthread_cond_broadcast(&msgcond);
pthread_mutex_unlock(&msgmutex);
return besked.msg;
};
MsgQueue::~MsgQueue()
{
pthread_mutex_destroy(&msgmutex);
pthread_cond_destroy(&msgcond);
};
//
// Created by stud on 11/3/19.
//
#pragma once
#include <iostream>
#include <pthread.h>
#include "Message.h"
#include <queue>
struct info : public Message
{
unsigned long id;
Message* msg;
};
class MsgQueue
{
public:
MsgQueue(unsigned long maxSize);
void send(unsigned long id, Message* msg = NULL);
Message* receive(unsigned long&id);
~MsgQueue();
private:
unsigned long maxSize_;
std::queue <info> msgqueue_;
pthread_cond_t msgcond;
pthread_mutex_t msgmutex;
};
//
//
//
#pragma once
class Message
{
public:
virtual ~Message(){};
};
The question currently lacks a few details (like what exactly your problem is, including any error message you get), but we can speculate a bit about what is going wrong. Since the problem is with ID_CAR_IND, let's start by examining the handler for that message. It prints a message, then dereferences a pointer that is a member of the message. Nothing obviously wrong there.
So let's check where that message is created. It sends just the message ID. Further investigation shows that send takes an optional second parameter; since it is not provided, the Message pointer sent along with the ID will be a nullptr.
Since this extra message data is needed by the message handler but not provided, the handler ends up dereferencing a null pointer, resulting in undefined behavior and (typically) a crash of the program with something like an access violation error.
The solution is to pass an OpenReq object to send (like you already do when handling the ID_START_IND message).
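Concretely, that means the ID_ENTRY_CFM case in carHandler() (the place where ID_CAR_IND is sent) could be changed along these lines; this is just a sketch reusing the names from your own code:
case ID_ENTRY_CFM:
{
    cout << "car " << car->id << " entered parking" << endl;
    OpenReq *req = new OpenReq();        // create a payload for the entry handler...
    req->Who_is_asking_ = car->carmq;    // ...so it knows which queue to reply to
    entryMq.send(ID_CAR_IND, req);       // previously this sent a null message pointer
}
break;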
Consider revising the code to check the pointers involved before chained uses of the -> operator (or to remove the chaining); as it stands, the code has no way to catch the null value.
Why don't you use an OS API instead of implementing your own MsgQueue?
I have a problem in a piece of real-life code where a member function of an already-deleted object is called by a boost::asio::deadline_timer, occasionally leading to a segmentation fault.
The issue I'm having is that the deletion of the deadline_timer is run from another timer on the same io_service. The deletion of the first deadline_timer triggers one final call to the handler, with a boost::asio::error::operation_aborted error. However, that call can only be scheduled on the (same) io_service after the delete has finished, and by then the object is already deleted and thus no longer valid.
So my question is: how can I prevent this from happening?
The following is a simplified example with the same fault:
//============================================================================
// Name : aTimeToKill.cpp
// Author : Pelle
// Description : Delete an object using a timer, from a timer
//============================================================================
#include <iostream>
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using namespace std;
using namespace boost;
struct TimeBomb
{
bool m_active;
asio::deadline_timer m_runTimer;
TimeBomb(boost::asio::io_service& ioService)
: m_active(true)
, m_runTimer(ioService)
{
cout << "Bomb placed #"<< hex << (int)this << endl;
m_runTimer.expires_from_now(boost::posix_time::millisec(1000));
m_runTimer.async_wait(boost::bind(&TimeBomb::executeStepFunction, this, _1));
}
~TimeBomb()
{
m_active = false;
m_runTimer.cancel();
cout << "Bomb defused #"<< hex << (int)this << endl;
}
void executeStepFunction(const boost::system::error_code& error)
{
// Canceled timer
if (error == boost::asio::error::operation_aborted)
{
std::cout << "Timer aborted: " << error.message() << " #" << std::hex << (int)this << std::endl;
return;
}
if (m_active)
{
// Schedule next step
cout << "tick .." <<endl;
m_runTimer.expires_from_now(
boost::posix_time::millisec(1000));
m_runTimer.async_wait(boost::bind(&TimeBomb::executeStepFunction, this, _1));
}
}
};
struct BomberMan
{
asio::deadline_timer m_selfDestructTimer;
TimeBomb* myBomb;
BomberMan(boost::asio::io_service& ioService)
: m_selfDestructTimer(ioService)
{
cout << "BomberMan ready " << endl;
myBomb = new TimeBomb(ioService);
m_selfDestructTimer.expires_from_now(boost::posix_time::millisec(10500));
m_selfDestructTimer.async_wait(boost::bind(&BomberMan::defuseBomb, this, _1));
}
void defuseBomb(const boost::system::error_code& error)
{
cout << "Defusing TimeBomb" << endl;
delete myBomb;
}
};
int main()
{
boost::asio::io_service m_ioService;
BomberMan* b = new BomberMan(m_ioService);
m_ioService.run();
return 0;
}
./aTimeToKill
BomberMan ready
Bomb placed #9c27198
tick ..
tick ..
tick ..
tick ..
tick ..
tick ..
tick ..
tick ..
tick ..
tick ..
Defusing TimeBomb
Bomb defused #9c27198
Timer aborted: Operation canceled #9c27198
The last line is printed after the delete, illustrating my problem.
The typical recipe to solve this problem is to use a shared_ptr
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>
using namespace std;
struct TimeBomb : public boost::enable_shared_from_this<TimeBomb>
{
bool m_active;
boost::asio::deadline_timer m_runTimer;
TimeBomb(boost::asio::io_service& ioService)
: m_active(true)
, m_runTimer(ioService)
{
cout << "Bomb placed #"<< hex << this << endl;
m_runTimer.expires_from_now(boost::posix_time::millisec(1000));
}
void start()
{
m_runTimer.async_wait(boost::bind(&TimeBomb::executeStepFunction, shared_from_this(), _1));
}
void stop()
{
m_runTimer.cancel();
}
~TimeBomb()
{
m_active = false;
m_runTimer.cancel();
cout << "Bomb defused #"<< hex << this << endl;
}
void executeStepFunction(const boost::system::error_code& error)
{
// Canceled timer
if (error == boost::asio::error::operation_aborted)
{
std::cout << "Timer aborted: " << error.message() << " #" << std::hex << this << std::endl;
return;
}
if (m_active)
{
// Schedule next step
cout << "tick .." <<endl;
m_runTimer.expires_from_now(
boost::posix_time::millisec(1000));
m_runTimer.async_wait(boost::bind(&TimeBomb::executeStepFunction, shared_from_this(), _1));
}
}
};
struct BomberMan
{
boost::asio::deadline_timer m_selfDestructTimer;
boost::shared_ptr<TimeBomb> myBomb;
BomberMan(boost::asio::io_service& ioService)
: m_selfDestructTimer(ioService)
{
cout << "BomberMan ready " << endl;
myBomb.reset( new TimeBomb(ioService) );
myBomb->start();
m_selfDestructTimer.expires_from_now(boost::posix_time::millisec(10500));
m_selfDestructTimer.async_wait(boost::bind(&BomberMan::defuseBomb, this, _1));
}
void defuseBomb(const boost::system::error_code& error)
{
cout << "Defusing TimeBomb" << endl;
myBomb->stop();
}
};
int main()
{
boost::asio::io_service m_ioService;
BomberMan* b = new BomberMan(m_ioService);
m_ioService.run();
return 0;
}
This is why you have boost::shared_ptr and boost::enable_shared_from_this. Inherit TimeBomb class from boost::enable_shared_from_this like this:
struct TimeBomb : public boost::enable_shared_from_this< TimeBomb >
{
...
}
Instantiate a shared ptr instead of bare ptr:
boost::shared_ptr< TimeBomb > myBomb;
...
myBomb.reset( new TimeBomb(ioService) );
And finally in TimeBomb use shared_from_this() instead of this to construct handlers.
m_runTimer.async_wait( boost::bind( &TimeBomb::executeStepFunction, shared_from_this(), _1));
And of course the TimeBomb class should expose a cancel method through which you cancel the async operation, rather than cancelling it by deleting (or, in this case, resetting) the shared_ptr.
The shared_ptr answer from Sam Miller works because the use of a shared_ptr keeps the TimeBomb hanging around for the lifetime of BomberMan. This might be ok for you, or it might not.
A suggestion for a more complete solution would be to obtain your TimeBomb instances from a factory and release them back to it when finished, rather than newing and deleting them explicitly (hold them as raw pointers, not shared_ptrs, since you don't own them even though you are controlling their lifecycle). The factory can keep them around until they are cancelled and then delete them for you. Keep Sam Miller's stop() function as is.
To implement this, derive the factory from an interface along the lines of
class ITimeBombObserver
{
public:
virtual void AllOperationsComplete(TimeBomb& TmBmb)=0;
};
Pass your factory to each TimeBomb as the ITimeBombObserver on construction, and have the cancellation of the TimeBomb call this function. The factory can clean up "used" TimeBombs each time one is created or released, or using a scheduled cleanup, or some other method, whichever seems most appropriate for your application.
Using this method, your BomberMan doesn't even need to explicitly release the TimeBomb in defuseBomb() if it doesn't want to; the call to stop() can auto-release it (although in that case you should still null the pointer, as it becomes effectively unusable at this point). Whether this is a good idea or not depends on your real problem, so I'll leave it to you to decide.
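A rough sketch of the factory side, to make the idea concrete (illustrative only: it assumes TimeBomb's constructor is extended to take an ITimeBombObserver&, and that the operation_aborted branch of executeStepFunction() finishes by calling AllOperationsComplete(*this) on that observer):
#include <algorithm>
#include <vector>
#include <boost/asio.hpp>
class TimeBombFactory : public ITimeBombObserver
{
public:
    TimeBomb* Create(boost::asio::io_service& ioService)
    {
        TimeBomb* bomb = new TimeBomb(ioService, *this);   // hypothetical extra constructor argument
        m_bombs.push_back(bomb);
        return bomb;
    }
    // Called by a TimeBomb once its final (aborted) handler has run,
    // so it is now safe to delete it.
    virtual void AllOperationsComplete(TimeBomb& bomb)
    {
        m_bombs.erase(std::remove(m_bombs.begin(), m_bombs.end(), &bomb), m_bombs.end());
        delete &bomb;
    }
private:
    std::vector<TimeBomb*> m_bombs;
};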
For a really simple fix, how about this? (I've only included the bits you need to change)
It works because, when the timer is cancelled, you only access stack variables. You don't really need to call the handler at all in the destructor, of course, but I'm assuming your real code requires this for whatever reason.
~TimeBomb()
{
m_active = false;
executeStepFunction(boost::asio::error::interrupted);
m_runTimer.cancel();
cout << "Bomb defused #"<< hex << (int)this << endl;
}
void executeStepFunction(const boost::system::error_code& error)
{
// Canceled timer
if (error == boost::asio::error::operation_aborted)
{
return;
}
if (error == boost::asio::error::interrupted)
{
std::cout << "Timer aborted: " << error.message() << " #" << std::hex << (int)this << std::endl;
return;
}
...
I'm basically trying to derive from wfilebuf so that I can both write to a file and intercept the output to print it to the console/debug window as well, as illustrated here:
http://savingyoutime.wordpress.com/2009/04/21/ and/or here: http://savingyoutime.wordpress.com/2009/04/22/40/
(ancient supporting ideas here: http://www.horstmann.com/cpp/streams.txt)
I've almost got it, but I can't seem to both write to the underlying file AND peek at the data being written.
I overrode the sync() function, similarly to the second example, but it seems that pbase() and pptr() are always NULL unless I set a buffer with setp(...), and doing that seems to break the file output. The file is always empty!
My crude attempt at this is below:
class LoggerBuffer : public wfilebuf {
// Functions
public:
LoggerBuffer();
~LoggerBuffer();
void open(const wchar_t loggerFile[]);
void close();
int sync();
int_type overflow(int_type c = EOF);
void setState(int newState);
// Variables
private:
int currentState;
static const int BUFFER_SIZE = 10;
wchar_t buffer[BUFFER_SIZE];
};
class LoggerStream : public wostream {
// Functions
public:
LoggerStream();
~LoggerStream();
void open(const wchar_t loggerFile[] = 0);
void close();
void setState(int newState);
};
LoggerBuffer::LoggerBuffer() {
wfilebuf::open("NUL", wios::out); currentState = 1;
}
LoggerBuffer::~LoggerBuffer() {
wcout << "Destruction of LoggerBuffer" << endl;
}
void LoggerBuffer::open(const wchar_t loggerFile[]) {
wcout << "LoggerBuffer Opening " << loggerFile << endl;
close();
wfilebuf* temp = wfilebuf::open(loggerFile, wios::out); //ios::out | ios::app | ios::trunc
setp (buffer, buffer+(BUFFER_SIZE-1));
}
void LoggerBuffer::close() {
wfilebuf::close();
}
int LoggerBuffer::sync() {
wcout << " Syncing ";
int out_waiting = pptr() - pbase();
wcout << out_waiting << " characters!";
wcout << endl;
wcout << "pptr(): " << (unsigned int)pptr() << endl;
return wfilebuf::sync();
}
LoggerBuffer::int_type LoggerBuffer::overflow(int_type c) {
wcout << "overflow! (" << (wchar_t)c << ")" << endl;
if (c == EOF)
return EOF;
if (sync() == EOF)
return EOF;
return wfilebuf::overflow(c);
}
void LoggerBuffer::setState(int newState) {
wcout << "New buffer state = " << newState << endl;
currentState = newState;
}
LoggerStream::LoggerStream() : wostream(new LoggerBuffer), wios(0) {
}
LoggerStream::~LoggerStream() {
delete rdbuf();
}
void LoggerStream::open(const wchar_t loggerFile[]) {
wcout << "LoggerStream Opening " << loggerFile << endl;
((LoggerBuffer*)rdbuf())->open(loggerFile);
}
void LoggerStream::close() {
((LoggerBuffer*)rdbuf())->close();
}
void LoggerStream::setState(int newState) {
wcout << "New stream state = " << newState << endl;
((LoggerBuffer*)rdbuf())->setState(newState);
}
Full disclosure: I asked a question regarding something similar earlier: Simple wostream logging class (with custom stream manipulators)
I think I have solved that problem though.
Any help is greatly appreciated! Thanks!
I'd use a filtering streambuf that does no buffering of its own and instead passes data through to a real streambuf (i.e., one that does real buffering) for each of the destinations. This should simplify your code quite a bit and let you concentrate on the parts you really care about.
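A minimal sketch of what such a filtering buffer could look like (illustrative, not drop-in code): it installs no put area of its own, so every character ends up in overflow() and is forwarded to both underlying streambufs.
#include <streambuf>
class wteebuf : public std::wstreambuf {
public:
    wteebuf(std::wstreambuf* sb1, std::wstreambuf* sb2) : sb1_(sb1), sb2_(sb2) {}
protected:
    // No put area is set up, so each character written to the stream arrives here.
    virtual int_type overflow(int_type c) {
        if (traits_type::eq_int_type(c, traits_type::eof()))
            return traits_type::not_eof(c);
        wchar_t ch = traits_type::to_char_type(c);
        bool ok1 = !traits_type::eq_int_type(sb1_->sputc(ch), traits_type::eof());
        bool ok2 = !traits_type::eq_int_type(sb2_->sputc(ch), traits_type::eof());
        return (ok1 && ok2) ? c : traits_type::eof();
    }
    // Flush both targets.
    virtual int sync() {
        int r1 = sb1_->pubsync();
        int r2 = sb2_->pubsync();
        return (r1 == 0 && r2 == 0) ? 0 : -1;
    }
private:
    std::wstreambuf* sb1_;
    std::wstreambuf* sb2_;
};
One target would be a plain wfilebuf for the log file and the other wcout.rdbuf(), so you never have to fight wfilebuf's own put area the way your current LoggerBuffer does.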
I'm modifying the stock quoter example from the wustl CORBA release. The assignment is to implement a reply handler for the StockFactory class that handles calls to get_stock()
Here's my FactoryHandler implementation:
FactoryHandler_i.h:
#ifndef TAO_TUTORIALS_QUOTER_AMI_CLIENT_FACTORYHANDLER_I_H
#define TAO_TUTORIALS_QUOTER_AMI_CLIENT_FACTORYHANDLER_I_H
#include "QuoterS.h"
class Stock_Factory_Handler_i : public POA_Quoter::AMI_Stock_FactoryHandler
{
public:
Stock_Factory_Handler_i (int *response_count, ::Quoter::Stock_var& result);
void get_stock (::Quoter::Stock_ptr ami_return_val);
void get_stock_excep (::Messaging::ExceptionHolder * excep_holder);
private:
int *response_count_;
::Quoter::Stock_var& result_;
};
#endif /* TAO_TUTORIALS_QUOTER_AMI_CLIENT_HANDLER_I_H */
FactoryHandler_i.cpp:
#include "FactoryHandler_i.h"
#include "ace/streams.h"
Stock_Factory_Handler_i::
Stock_Factory_Handler_i (int *response_count, ::Quoter::Stock_var& result)
: response_count_ (response_count), result_ (result)
{
}
void
Stock_Factory_Handler_i::get_stock (::Quoter::Stock_ptr ami_return_val)
{
cout << "storing result" << endl;
result_ = ami_return_val;
(*this->response_count_)++;
}
void
Stock_Factory_Handler_i::get_stock_excep (::Messaging::ExceptionHolder * excep_holder)
{
// We ignore the exception, but this counts as a response, otherwise
// the application would not finish.
cerr << "Exception raised while getting stock"
<< endl;
(*this->response_count_)++;
}
And here is client.cpp, starting just before the part where the changes were made:
// ...
// Create and activate the handler...
int response_count = 0;
Single_Query_Stock_Handler_i handler_i (&response_count);
Quoter::AMI_Single_Query_StockHandler_var handler =
handler_i._this ();
// Create and activate the factory handler...
Quoter::Stock_var result;
Stock_Factory_Handler_i factory_handler_i (&response_count, result);
Quoter::AMI_Stock_FactoryHandler_var factory_handler =
factory_handler_i._this();
// Send all the requests, careful with error handling
int request_count = 0;
for (int i = 2; i != argc+1; ++i) {
try {
// Get the stock object
cout << "looking up stock symbol " << argv[i] << endl;
factory->sendc_get_stock (factory_handler.in (), argv[i]);
sleep(3); // wait for a response
cout << "converting result" << endl;
Quoter::Single_Query_Stock_var stock =
Quoter::Single_Query_Stock::_narrow (result.in ());
cout << "checking result" << endl;
CORBA::Any any;
any <<= stock;
CORBA::TypeCode_var tc = any.type();
cout << tc->kind() << endl;
if (CORBA::is_nil (stock.in ())) {
cerr << "Cannot get single query interface for <"
<< argv[i] << ">" << endl;
continue;
}
cout << "reading result" << endl;
stock->sendc_get_price_and_names (handler.in ());
request_count++;
}
catch (Quoter::Invalid_Stock_Symbol &) {
cerr << "Invalid stock symbol <"
<< argv[i] << ">" << endl;
}
}
while (response_count < 2 * request_count // multiply by 2 because both handlers increment response_count
&& orb->work_pending ()) {
orb->perform_work ();
}
// ...
When running the client, the output is:
looking up stock symbol MSFT
converting result
checking result
14
Cannot get single query interface for <MSFT>
(The 14 is the typecode for Stock, that's only for debugging)
Notably missing from the above is the "storing result" message that's supposed to be printed in the FactoryHandler's get_stock() callback method. I'm at a loss as to why, since the sendc_get_stock() method doesn't produce any (immediate) errors and is basically just a copy of the StockHandler's code, and from there it's the responsibility of the AMI/ORB interface to make the callback. But the original example (with a StockHandler only) works fine.
What am I doing wrong (and how do I fix it)?
EDIT: another bit of information: on the server side, StockFactory's get_stock() method does get called.
Sorry, I have no answer for you. But as a hint, ask your question on TAO's mailing list at http://www.cs.wustl.edu/~schmidt/ACE-mail.html
HTH
I think your problem is that work_pending returns true only if the ORB has immediate work to do, so it returns false in the window after your client has sent its request and before the server sends its reply.
To validate that, simply remove the && orb->work_pending() condition from the loop, and use the version of perform_work that takes a timeout argument.
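For example, the wait loop could become something like this (a sketch; the 100 ms timeout is arbitrary, and it assumes the TAO overload of perform_work() that takes an ACE_Time_Value):
while (response_count < 2 * request_count) {
    // Give the ORB up to 100 ms at a time to dispatch incoming replies,
    // even if no work happens to be queued at the moment of the call.
    ACE_Time_Value tv (0, 100 * 1000);   // (seconds, microseconds)
    orb->perform_work (tv);
}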