I would like to check the difference between using sc_buffer and sc_signal. I coded a module that adds two random numbers, and I run two tests in parallel: one using sc_buffer and the other using sc_signal. However, when I inspect the traces with gtkwave I see the same waveforms for both, so it seems there is no difference in this case. How can I observe the difference? Or are these two channel types intended for different applications?
sc_buffer is probably most useful when modeling at an abstract level.
For example, consider modeling a serial communication channel. The transmitter could send the same character twice in a row. If an sc_signal was used as the channel, the receiver wouldn't detect the second character, but with an sc_buffer, it would.
#include <systemc>
#include <iostream>

using namespace sc_core;
using namespace std;

struct Transmitter : public sc_module {
    sc_out<char> out;

    Transmitter(sc_module_name name) : sc_module(name) {
        SC_THREAD(transmit);
    }

    void transmit() {
        wait(1, SC_NS);
        out.write('x');
        wait(1, SC_NS);
        out.write('x');
        wait(1, SC_NS);
        out.write('y');
    }

    SC_HAS_PROCESS(Transmitter);
};
struct Receiver : public sc_module {
    sc_in<char> in;

    Receiver(sc_module_name name) : sc_module(name) {
        SC_METHOD(receive);
        sensitive << in;
        dont_initialize();
    }

    void receive() {
        cout << sc_time_stamp() << ": " << name() << " received "
             << in.read() << endl;
    }

    SC_HAS_PROCESS(Receiver);
};
int sc_main(int argc, char* argv[])
{
    sc_signal<char> signal;
    sc_buffer<char> buffer;

    Transmitter signal_transmitter("signal_transmitter");
    Receiver signal_receiver("signal_receiver");
    Transmitter buffer_transmitter("buffer_transmitter");
    Receiver buffer_receiver("buffer_receiver");

    signal_transmitter.out(signal);
    signal_receiver.in(signal);
    buffer_transmitter.out(buffer);
    buffer_receiver.in(buffer);

    sc_start();
    return 0;
}
The above example produces this output:
1 ns: signal_receiver received x
1 ns: buffer_receiver received x
2 ns: buffer_receiver received x
3 ns: signal_receiver received y
3 ns: buffer_receiver received y
Notice that signal_receiver didn't detect the character sent at 2 ns.
You won't see any difference in a VCD trace, because the values stored in the sc_buffer and sc_signal channels are identical. The difference lies in when the receiver is triggered.
For the differences between sc_buffer and sc_signal, you can also look at the answer to this related question:
In SystemC, can the sc_signal_in/out type port be bound to the primary channel sc_buffer?
sc_buffer is derived from sc_signal; it re-implements the write and update functions to generate an event notification on every write, not just on value changes.
So if you generate new numbers that happen to equal the previous value written into the channel, and you dump some output at each event notification, you should see a difference.
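To illustrate, here is a simplified sketch of the two update policies (illustrative only, not the actual library source); cur, next, and ev stand for the channel's current value, pending value, and value-changed event:

#include <systemc>

// sc_signal-style update: compare before notifying.
template <class T>
void signal_style_update(T& cur, const T& next, sc_core::sc_event& ev)
{
    if (!(next == cur)) {              // notify only on a value change
        cur = next;
        ev.notify(sc_core::SC_ZERO_TIME);
    }
}

// sc_buffer-style update: no comparison at all.
template <class T>
void buffer_style_update(T& cur, const T& next, sc_core::sc_event& ev)
{
    cur = next;                        // event fires on every write, changed or not
    ev.notify(sc_core::SC_ZERO_TIME);
}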
From a source I am getting stream data whose size will not be known before the final processing, but the minimum is 10 GB. I have to send this large amount of data using gRPC.
Note that this large amount of data will be passed through gRPC while the stream is still being processed. At this step, I planned to store all the values in a vector.
Regarding sending a large amount of data, I have tried to get ideas and found:
This post, where it is mentioned not to pass large data using gRPC and to use some other message protocol instead; however, I am constrained to use gRPC (at least for now).
From this post I tried to learn how chunked messages can be sent, but I am not sure whether it relates to my problem.
The first post where I found a blog about streaming data using Go.
This presentation in Python based on that post, but it is also incomplete.
The gRPC example could be a good start, but I could not decode it due to my limited C++ knowledge.
Since then I have made a big update to the question, but its main theme is unchanged.
What I have done so far, and some points about my project (the GitHub repo is available here):
A unary RPC is present in the project.
I know that my new bidirectional RPC will take some time. I want the unary RPC not to wait for the completion of the bidirectional RPC. Right now it works synchronously, with the unary RPC waiting for the streaming one to complete before passing its status.
I am omitting unnecessary lines in the C++ code, but giving the whole proto files.
big_data.proto
syntax = "proto3";
package demo_grpc;
message Large_Data {
repeated int32 large_data_collection = 1 [packed=true];
int32 data_chunk_number = 2;
}
addressbook.proto
syntax = "proto3";
package demo_grpc;
import "myproto/big_data.proto";
message S_Response {
string name = 1;
string street = 2;
string zip = 3;
string city = 4;
string country = 5;
int32 double_init_val = 6;
}
message C_Request {
uint32 choose_area = 1;
string name = 2;
int32 init_val = 3;
}
service AddressBook {
rpc GetAddress(C_Request) returns (S_Response) {}
rpc Stream_Chunk_Service(stream Large_Data) returns (stream Large_Data) {}
}
client.cpp
#include <big_data.pb.h>
#include <addressbook.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <grpcpp/create_channel.h>
#include <iostream>
#include <numeric>
#include <vector>
#include <cstdint>

using namespace std;

// This function prompts the user to set the value for the required area
void Client_Request(demo_grpc::C_Request &request_)
{
    // do processing for unary rpc. Intentionally omitted here
}

// According to the client request, this function displays the value of the protobuf message
void Server_Response(demo_grpc::C_Request &request_, const demo_grpc::S_Response &response_)
{
    // do processing for unary rpc. Intentionally omitted here
}

// The following function builds a large vector and then chunks it to send via stream from client to server
void Stream_Data_Chunk_Request(demo_grpc::Large_Data &request_,
                               demo_grpc::Large_Data &response_,
                               uint64_t preferred_chunk_size_in_kibyte)
{
    // A dummy vector which in the real case will be the large data set's container
    std::vector<int32_t> large_vector;

    // iterate 1024 * 10 times for now
    for(int64_t i = 0; i < 1024 * 10; i++)
    {
        large_vector.push_back(1);
    }

    uint64_t preferred_chunk_size_in_kibyte_holds_integer_num = 0; // how many integers one chunk contains will be stored here

    // the total chunk number is computed here
    uint32_t total_chunk = total_chunk_counter(large_vector.size(), preferred_chunk_size_in_kibyte, preferred_chunk_size_in_kibyte_holds_integer_num);

    // A temporary counter to trace the index of large_vector
    int32_t temp_count = 0;

    // the loop runs while the total number of chunks is greater than 0; total_chunk is decremented after each iteration
    while(total_chunk > 0)
    {
        for (int64_t i = temp_count * preferred_chunk_size_in_kibyte_holds_integer_num; i < preferred_chunk_size_in_kibyte_holds_integer_num + temp_count * preferred_chunk_size_in_kibyte_holds_integer_num; i++)
        {
            // the repeated field large_data_collection takes its values from large_vector
            request_.add_large_data_collection(large_vector[i]);
        }
        temp_count++;
        total_chunk--;

        std::string ip_address = "localhost:50051";
        auto channel = grpc::CreateChannel(ip_address, grpc::InsecureChannelCredentials());
        std::unique_ptr<demo_grpc::AddressBook::Stub> stub = demo_grpc::AddressBook::NewStub(channel);
        grpc::ClientContext context;
        std::shared_ptr<::grpc::ClientReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data> > stream(stub->Stream_Chunk_Service(&context));

        // Once a chunk reaches its size, this repeated field is cleared. I am not sure whether the
        // value can be transferred to the server before this, but my assumption is that it should be
        request_.clear_large_data_collection();
    }
}

int main(int argc, char* argv[])
{
    std::string client_address = "localhost:50051";
    std::cout << "Address of client: " << client_address << std::endl;

    // The following part is for the unary RPC
    demo_grpc::C_Request query;
    demo_grpc::S_Response result;
    Client_Request(query);

    // This part is for the streaming chunk data (bidirectional streaming RPC)
    demo_grpc::Large_Data stream_chunk_request_;
    demo_grpc::Large_Data stream_chunk_response_;
    uint64_t preferred_chunk_size_in_kibyte = 64;
    Stream_Data_Chunk_Request(stream_chunk_request_, stream_chunk_response_, preferred_chunk_size_in_kibyte);

    // Call
    auto channel = grpc::CreateChannel(client_address, grpc::InsecureChannelCredentials());
    std::unique_ptr<demo_grpc::AddressBook::Stub> stub = demo_grpc::AddressBook::NewStub(channel);
    grpc::ClientContext context;
    grpc::Status status = stub->GetAddress(&context, query, &result);

    // the following status check is for the unary rpc, as far as I have understood the structure
    if (status.ok())
    {
        Server_Response(query, result);
    }
    else
    {
        std::cout << status.error_message() << std::endl;
    }
    return 0;
}
helper function total_chunk_counter
#include <cmath>
#include <cstdint>

uint32_t total_chunk_counter(uint64_t num_of_container_content,
                             uint64_t preferred_chunk_size_in_kibyte,
                             uint64_t &preferred_chunk_size_in_kibyte_holds_integer_num)
{
    // int32 is 4 bytes, so the container size in KiB is (4 * count) / 1024
    uint64_t container_size_in_kibyte = (sizeof(int32_t) * num_of_container_content) / 1024;
    if (container_size_in_kibyte == 0)
        container_size_in_kibyte = 1; // guard against division by zero for very small containers

    preferred_chunk_size_in_kibyte_holds_integer_num = (num_of_container_content * preferred_chunk_size_in_kibyte) / container_size_in_kibyte;

    float total_chunk = static_cast<float>(num_of_container_content) / preferred_chunk_size_in_kibyte_holds_integer_num;
    return static_cast<uint32_t>(std::ceil(total_chunk));
}
server.cpp, which is still totally incomplete
#include <myproto/big_data.pb.h>
#include <myproto/addressbook.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <grpcpp/server_builder.h>
#include <iostream>

class AddressBookService final : public demo_grpc::AddressBook::Service {
public:
    virtual ::grpc::Status GetAddress(::grpc::ServerContext* context, const ::demo_grpc::C_Request* request, ::demo_grpc::S_Response* response)
    {
        switch (request->choose_area())
        {
            // do processing for unary rpc. Intentionally omitted here
        }
        std::cout << "Information of " << request->choose_area() << " is sent to Client" << std::endl;
        return grpc::Status::OK;
    }

    // Bi-directional streaming chunk data
    virtual ::grpc::Status Stream_Chunk_Service(::grpc::ServerContext* context, ::grpc::ServerReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data>* stream)
    {
        // stream->Large_Data;
        return grpc::Status::OK;
    }
};

void RunServer()
{
    std::cout << "grpc Version: " << grpc::Version() << std::endl;
    std::string server_address = "localhost:50051";
    std::cout << "Address of server: " << server_address << std::endl;

    grpc::ServerBuilder builder;
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    AddressBookService my_service;
    builder.RegisterService(&my_service);

    std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
    server->Wait();
}

int main(int argc, char* argv[])
{
    RunServer();
    return 0;
}
In summary, what I want:
I need to pass the content of large_vector via the repeated field large_data_collection of message Large_Data. I should split large_vector into chunks and populate the repeated field large_data_collection one chunk at a time.
On the server side, all chunks will be concatenated, preserving the exact order of large_vector. Some processing will be done on them (e.g. doubling the value at each index), and then the whole data set will be sent back to the client as a stream of chunks.
It would be great if the existing unary RPC did not have to wait for the completion of the bidirectional RPC.
A solution with an example would be really helpful (a rough sketch follows below). Thanks in advance. The GitHub repo is available here.
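For reference, here is a rough sketch of how the chunked transfer described above could be structured (the helper names such as send_in_chunks are illustrative; this is an outline under the proto definitions given earlier, not a drop-in solution). The client opens one stream, calls Write() once per chunk, signals WritesDone(), and then reads the processed chunks back in order; the server mirrors this with a Read()/Write() loop:

#include <grpcpp/grpcpp.h>
#include "myproto/addressbook.grpc.pb.h"
#include <algorithm>
#include <cstdint>
#include <vector>

// Client side: send large_vector over a single bidirectional stream,
// one Large_Data message per chunk, then read the processed chunks back.
void send_in_chunks(demo_grpc::AddressBook::Stub* stub,
                    const std::vector<int32_t>& large_vector,
                    size_t ints_per_chunk)
{
    grpc::ClientContext context;
    auto stream = stub->Stream_Chunk_Service(&context);

    demo_grpc::Large_Data chunk;
    int32_t chunk_number = 0;
    for (size_t offset = 0; offset < large_vector.size(); offset += ints_per_chunk)
    {
        chunk.Clear();
        chunk.set_data_chunk_number(chunk_number++);
        const size_t end = std::min(offset + ints_per_chunk, large_vector.size());
        for (size_t i = offset; i < end; ++i)
            chunk.add_large_data_collection(large_vector[i]);
        if (!stream->Write(chunk))
            break;                        // stream broken, stop writing
    }
    stream->WritesDone();                 // tell the server we are finished

    demo_grpc::Large_Data processed;
    while (stream->Read(&processed))
    {
        // chunks come back in order; append processed.large_data_collection()
        // to the result container here
    }
    grpc::Status status = stream->Finish();
}

// Server side: read each chunk, process it, and write it back in order.
grpc::Status Stream_Chunk_Service_impl(
    grpc::ServerReaderWriter<demo_grpc::Large_Data, demo_grpc::Large_Data>* stream)
{
    demo_grpc::Large_Data chunk;
    while (stream->Read(&chunk))
    {
        for (int i = 0; i < chunk.large_data_collection_size(); ++i)
            chunk.set_large_data_collection(i, 2 * chunk.large_data_collection(i));
        stream->Write(chunk);             // echo the processed chunk back
    }
    return grpc::Status::OK;
}

To keep the unary RPC from waiting, the streaming call could be issued on its own std::thread, or the asynchronous CompletionQueue API could be used instead.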
I have a Node.js application from which I want to send a JSON object to a C++ application.
The C++ application will use the Poco libraries (pocoproject.org).
I want the interaction to be lightning fast, so preferably no files or network sockets.
I have been looking into these areas:
Pipes
Shared memory
unixSockets
What should I focus on, and can someone point me to docs and samples?
First of all, some more data is needed to give good advice.
In general, shared memory is the fastest, since no transfer is required, but it's also the hardest to get right. I'm not sure you'd be able to do that with Node, though.
If this program is just run for this one task and then closes, it might be worth simply passing your JSON to the C++ program as a startup parameter:
myCPPProgram.exe "JsonDataHere"
The simplest thing with decent performance should be a socket connection using Unix domain sockets with some low-overhead data frame format. E.g., two-byte length followed by UTF-8 encoded JSON. On the C++ side this should be easy to implement using the Poco::Net::TCPServer framework. Depending on where your application will go in the future you may run into limits of this format, but if it's basically just streaming JSON objects it should be fine.
To make it even simpler, you can use a WebSocket, which will take care of the framing for you, at the cost of the overhead for the initial connection setup (HTTP upgrade request). It may even be possible to run the WebSocket protocol over a Unix domain socket.
However, the performance difference between a (localhost only) TCP socket and a Unix domain socket may not even be significant, given all the JavaScript/node.js overhead. Also, if performance is really a concern, JSON may not even be the right serialization format to begin with.
Anyway, without more detailed information (size of JSON data, message frequency) it's hard to give a definite recommendation.
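To make the framing idea concrete, here is a minimal sketch (the class name is illustrative, and it assumes a Poco::Net::TCPServerConnection subclass like the one later in this thread) that reads a two-byte big-endian length prefix followed by the UTF-8 JSON payload:

#include "Poco/Net/TCPServerConnection.h"
#include <string>
#include <vector>

class JsonFrameConnection : public Poco::Net::TCPServerConnection {
public:
    using Poco::Net::TCPServerConnection::TCPServerConnection;

    void run() override {
        // Read frames until the peer closes the connection.
        unsigned char header[2];
        while (receiveAll(header, 2)) {
            // Two-byte big-endian length prefix (an assumed convention).
            const std::size_t len = (header[0] << 8) | header[1];
            std::vector<char> payload(len);
            if (!receiveAll(payload.data(), len)) break;
            std::string json(payload.begin(), payload.end());
            // ... parse and handle the JSON document here ...
        }
    }

private:
    // receiveBytes() may return fewer bytes than requested, so loop until done.
    bool receiveAll(void* buf, std::size_t n) {
        char* p = static_cast<char*>(buf);
        std::size_t got = 0;
        while (got < n) {
            int r = socket().receiveBytes(p + got, static_cast<int>(n - got));
            if (r <= 0) return false; // peer closed or error
            got += r;
        }
        return true;
    }
};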
I created a TCPServer, which seems to work. However if I close the server and start it again I get this error:
Net Exception: Address already in use: /tmp/app.SocketTest
Is it not possible to re-attach to the socket if it exists?
Here is the code for the TCPServer:
#include "Poco/Util/ServerApplication.h"
#include "Poco/Net/TCPServer.h"
#include "Poco/Net/TCPServerConnection.h"
#include "Poco/Net/TCPServerConnectionFactory.h"
#include "Poco/Util/Option.h"
#include "Poco/Util/OptionSet.h"
#include "Poco/Util/HelpFormatter.h"
#include "Poco/Net/StreamSocket.h"
#include "Poco/Net/ServerSocket.h"
#include "Poco/Net/SocketAddress.h"
#include "Poco/File.h"
#include <fstream>
#include <iostream>
using Poco::Net::ServerSocket;
using Poco::Net::StreamSocket;
using Poco::Net::TCPServer;
using Poco::Net::TCPServerConnection;
using Poco::Net::TCPServerConnectionFactory;
using Poco::Net::SocketAddress;
using Poco::Util::ServerApplication;
using Poco::Util::Option;
using Poco::Util::OptionSet;
using Poco::Util::HelpFormatter;
class UnixSocketServerConnection: public TCPServerConnection
    /// This class handles all client connections.
{
public:
    UnixSocketServerConnection(const StreamSocket& s):
        TCPServerConnection(s)
    {
    }

    void run()
    {
        try
        {
            std::string message;
            char buffer[1024];
            int n = 1;
            while (n > 0)
            {
                // leave room for the terminator so buffer[n] never writes past the end
                n = socket().receiveBytes(buffer, sizeof(buffer) - 1);
                if (n <= 0) break;
                buffer[n] = '\0';
                message += buffer;
                // a short read is treated as the end of one message
                if (static_cast<size_t>(n) < sizeof(buffer) - 1 && !message.empty())
                {
                    EchoBack(message);
                    message.clear();
                }
            }
        }
        catch (Poco::Exception& exc)
        {
            std::cerr << "Error: " << exc.displayText() << std::endl;
        }
        std::cout << "Disconnected." << std::endl;
    }

private:
    inline void EchoBack(const std::string& message)
    {
        std::cout << "Message: " << message << std::endl;
        socket().sendBytes(message.data(), static_cast<int>(message.length()));
    }
};
class UnixSocketServerConnectionFactory: public TCPServerConnectionFactory
    /// A factory
{
public:
    UnixSocketServerConnectionFactory()
    {
    }

    TCPServerConnection* createConnection(const StreamSocket& socket)
    {
        std::cout << "Got new connection." << std::endl;
        return new UnixSocketServerConnection(socket);
    }
};
class UnixSocketServer: public Poco::Util::ServerApplication
    /// The main application class.
{
public:
    UnixSocketServer(): _helpRequested(false)
    {
    }

    ~UnixSocketServer()
    {
    }

protected:
    void initialize(Application& self)
    {
        loadConfiguration(); // load default configuration files, if present
        ServerApplication::initialize(self);
    }

    void uninitialize()
    {
        ServerApplication::uninitialize();
    }

    void defineOptions(OptionSet& options)
    {
        ServerApplication::defineOptions(options);
        options.addOption(
            Option("help", "h", "display help information on command line arguments")
                .required(false)
                .repeatable(false));
    }

    void handleOption(const std::string& name, const std::string& value)
    {
        ServerApplication::handleOption(name, value);
        if (name == "help")
            _helpRequested = true;
    }

    void displayHelp()
    {
        HelpFormatter helpFormatter(options());
        helpFormatter.setCommand(commandName());
        helpFormatter.setUsage("OPTIONS");
        helpFormatter.setHeader("A server application to test unix domain sockets.");
        helpFormatter.format(std::cout);
    }

    int main(const std::vector<std::string>& args)
    {
        if (_helpRequested)
        {
            displayHelp();
        }
        else
        {
            // set up the unix domain socket
            Poco::File socketFile("/tmp/app.SocketTest");
            SocketAddress unixSocket(SocketAddress::UNIX_LOCAL, socketFile.path());

            // set up a server socket
            ServerSocket svs(unixSocket);

            // set up a TCPServer instance
            TCPServer srv(new UnixSocketServerConnectionFactory, svs);

            // start the TCPServer
            srv.start();

            // wait for CTRL-C or kill
            waitForTerminationRequest();

            // stop the TCPServer
            srv.stop();
        }
        return Application::EXIT_OK;
    }

private:
    bool _helpRequested;
};
int main(int argc, char **argv)
{
    UnixSocketServer app;
    return app.run(argc, argv);
}
The solution I have gone for is to use Unix domain sockets. It will run on a Raspbian setup, and the socket file is placed in /dev/shm, which is mounted in RAM.
On the C++ side, I use the Poco::Net::TCPServer framework as described elsewhere in this post.
On the Node.js side, I use the node-ipc module (http://riaevangelist.github.io/node-ipc/).
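Regarding the "Address already in use" error above: a Unix domain socket leaves its file behind when the server exits, and binding fails while that stale file exists. A minimal sketch (using the same Poco types as the server code above; the helper name is illustrative) that removes the stale file before binding:

#include "Poco/File.h"
#include "Poco/Net/ServerSocket.h"
#include "Poco/Net/SocketAddress.h"
#include <string>

// Create the Unix-domain server socket, removing any stale socket file
// left behind by a previous run (the cause of "Address already in use").
Poco::Net::ServerSocket makeUnixServerSocket(const std::string& path)
{
    Poco::File socketFile(path);
    if (socketFile.exists())
        socketFile.remove();
    Poco::Net::SocketAddress addr(Poco::Net::SocketAddress::UNIX_LOCAL, path);
    return Poco::Net::ServerSocket(addr);
}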
I'm having a problem with a const request when using protocol buffers with gRPC. Here is my problem:
I would like to make an in-place modification of an array's values. For that I wrote this simple example where I try to pass an array and sum all of its contents. Here's my code:
adder.proto:
syntax = "proto3";
option java_package = "io.grpc.examples";
package adder;
// The greeter service definition.
service Adder {
// Sends a greeting
rpc Add (AdderRequest) returns (AdderReply) {}
}
// The request message containing the user's name.
message AdderRequest {
repeated int32 values = 1;
}
// The response message containing the greetings
message AdderReply {
int32 sum = 1;
}
server.cc:
//
// Created by Eric Reis on 7/6/16.
//
#include <iostream>
#include <grpc++/grpc++.h>
#include "adder.grpc.pb.h"
class AdderImpl final : public adder::Adder::Service
{
public:
    grpc::Status Add(grpc::ServerContext* context, const adder::AdderRequest* request,
                     adder::AdderReply* reply) override
    {
        int sum = 0;
        for(int i = 0, sz = request->values_size(); i < sz; i++)
        {
            request->set_values(i, 10); // -> this gives an error caused by the const declaration of the request variable
                                        // error: "Non-const function 'set_values' is called on the const object"
            sum += request->values(i);  // -> this works fine
        }
        reply->set_sum(sum);
        return grpc::Status::OK;
    }
};

void RunServer()
{
    std::string server_address("0.0.0.0:50051");
    AdderImpl service;

    grpc::ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service" as the instance through which we'll communicate with
    // clients. In this case it corresponds to a *synchronous* service.
    builder.RegisterService(&service);
    // Finally assemble the server.
    std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
    std::cout << "Server listening on " << server_address << std::endl;

    // Wait for the server to shutdown. Note that some other thread must be
    // responsible for shutting down the server for this call to ever return.
    server->Wait();
}

int main(int argc, char** argv)
{
    RunServer();
    return 0;
}
client.cc:
//
// Created by Eric Reis on 7/6/16.
//
#include <iostream>
#include <grpc++/grpc++.h>
#include "adder.grpc.pb.h"
class AdderClient
{
public:
    AdderClient(std::shared_ptr<grpc::Channel> channel) : stub_(adder::Adder::NewStub(channel)) {}

    int Add(int* values, int sz) {
        // Data we are sending to the server.
        adder::AdderRequest request;
        for (int i = 0; i < sz; i++)
        {
            request.add_values(values[i]);
        }

        // Container for the data we expect from the server.
        adder::AdderReply reply;

        // Context for the client. It could be used to convey extra information to
        // the server and/or tweak certain RPC behaviors.
        grpc::ClientContext context;

        // The actual RPC.
        grpc::Status status = stub_->Add(&context, request, &reply);

        // Act upon its status.
        if (status.ok())
        {
            return reply.sum();
        }
        else {
            std::cout << "RPC failed" << std::endl;
            return -1;
        }
    }

private:
    std::unique_ptr<adder::Adder::Stub> stub_;
};

int main(int argc, char** argv) {
    // Instantiate the client. It requires a channel, out of which the actual RPCs
    // are created. This channel models a connection to an endpoint (in this case,
    // localhost at port 50051). We indicate that the channel isn't authenticated
    // (use of InsecureChannelCredentials()).
    AdderClient adder(grpc::CreateChannel("localhost:50051",
                                          grpc::InsecureChannelCredentials()));
    int values[] = {1, 2};
    int sum = adder.Add(values, 2);

    std::cout << "Adder received: " << sum << std::endl;
    return 0;
}
My error happens when I try to call set_values() on the request object, which is declared const. I understand why this error occurs, but I can't figure out a way to overcome it without making a copy of the array.
I tried removing the const qualifier, but the RPC call fails when I do that.
Since I'm new to the RPC world, and even more so to gRPC and Google protocol buffers, I'd like to ask for your help. What is the best way to solve this problem?
Please see my answer here. The server receives a copy of the AdderRequest sent by the client. If you were to modify it, the client's original AdderRequest would not be modified. If by "in place" you mean the server modifies the client's original memory, no RPC technology can truly accomplish that, because the client and server run in separate address spaces (processes), even on different machines.
If you truly need the server to modify the client's memory:
Ensure the server and client run on the same machine.
Use OS-specific shared-memory APIs such as shm_open() and mmap() to map the same chunk of physical memory into the address spaces of both the client and the server.
Use RPC to transmit the identifier (name) of the shared memory (not the actual data in the memory) and to invoke the server's processing.
When both client and server have opened and mapped the memory, they both have pointers (likely with different values in the different address spaces) to the same physical memory, so the server will be able to read what the client writes there (with no copying or transmitting) and vice versa.
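For illustration, here is a minimal POSIX sketch of steps 2 and 3 (the segment name passed in and the int32 layout are assumptions for this example): the client creates and fills the segment, sends only its name and element count over the RPC, and the server maps the same name and modifies the values in place.

#include <fcntl.h>      // shm_open, O_* flags
#include <sys/mman.h>   // mmap, PROT_*, MAP_*
#include <unistd.h>     // ftruncate, close
#include <cstdint>
#include <cstddef>

// Map `count` int32 values from a named shared-memory segment.
// The creator sizes the segment; the other side just opens and maps it.
int32_t* map_shared_values(const char* name, std::size_t count, bool create)
{
    const int flags = create ? (O_CREAT | O_RDWR) : O_RDWR;
    const int fd = shm_open(name, flags, 0600);
    if (fd < 0) return nullptr;

    const std::size_t bytes = count * sizeof(int32_t);
    if (create && ftruncate(fd, static_cast<off_t>(bytes)) != 0) {
        close(fd);
        return nullptr;
    }
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  // the mapping stays valid after the descriptor is closed
    return p == MAP_FAILED ? nullptr : static_cast<int32_t*>(p);
}

The RPC message then carries only the segment name and count; after the call returns, the client reads the server's in-place modifications directly from its own mapping.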
I have created my own OMNeT++ listener class as follows:
header file
#ifndef MYFRAMELISTENER_H_
#define MYFRAMELISTENER_H_

#include <clistener.h>
#include <vector>

class MyFrameListener : public cListener {
public:
    int tempDelmeJustForTest;
    simsignal_t signalIDArray[14];
    int index;

public:
    MyFrameListener();
    virtual ~MyFrameListener();
    virtual void receiveSignal(cComponent *source, simsignal_t signalID, cObject *obj);
};

#endif /* MYFRAMELISTENER_H_ */
cc file
#include "MyFrameListener.h"

MyFrameListener::MyFrameListener() {
    this->tempDelmeJustForTest = 0;
}

MyFrameListener::~MyFrameListener() {
}

void MyFrameListener::receiveSignal(cComponent *source, simsignal_t signalID, cObject *obj) {
    tempDelmeJustForTest++;
}
SimpleModule cc file:
void ListenersModule::initialize()
{
    // TODO - Generated method body
    frameListener = new MyFrameListener();
    //subscribe("packetReceivedFromLower", frameListener);
    simulation.getSystemModule()->subscribe("packetReceivedFromLower", frameListener);
}

void ListenersModule::handleMessage(cMessage *msg)
{
    // TODO - Generated method body
}

void ListenersModule::finish() {
    //simulation.getSystemModule()->unsubscribe("packetReceivedFromLower", frameListener);
    recordScalar("My Listened Values", this->frameListener->tempDelmeJustForTest);
}
Here, I am trying to count the number of Ethernet frames received in EtherMACFullDuplex by incrementing the tempDelmeJustForTest variable.
EtherMACFullDuplex is a module located in inet/src/inet/linklayer/ethernet/EtherMACFullDuplex.cc and it is used to create the Ethernet phy port.
This class has a function as shown below:
void EtherMACFullDuplex::processReceivedDataFrame(EtherFrame *frame)
{
    emit(packetReceivedFromLowerSignal, frame);

    // strip physical layer overhead (preamble, SFD) from frame
    frame->setByteLength(frame->getFrameByteLength());

    // statistics
    unsigned long curBytes = frame->getByteLength();
    numFramesReceivedOK++;
    numBytesReceivedOK += curBytes;
    emit(rxPkOkSignal, frame);

    numFramesPassedToHL++;
    emit(packetSentToUpperSignal, frame);

    // pass up to upper layer
    EV_INFO << "Sending " << frame << " to upper layer.\n";
    send(frame, "upperLayerOut");
}
It emits the signal packetReceivedFromLower, and my listener has subscribed to it, as shown in the code above.
The problem is that the counter shows tempDelmeJustForTest = 12 when the sender sends only 6 Ethernet frames. Why?
Also, I am referencing the Core4Inet project in addition to the INET project.
I am guessing you are overhearing the signal emitted by both the MAC layer of the destination host and the MAC layer of the switch.
By subscribing not just at your own module, but at the module returned by simulation.getSystemModule(), you are overhearing all signals of type packetReceivedFromLower emitted anywhere in the simulation. The "Subscribing to Signals" chapter of the user manual has more information on this mechanism.
If you want to know where the signal you are overhearing was emitted from, you can use the source parameter of your receiveSignal method.
The counter tempDelmeJustForTest shows 12 when the sender sends 6 Ethernet frames because the method processReceivedDataFrame is invoked 6 times in your switch and 6 times in the destination host.
The command:
simulation.getSystemModule()->subscribe("packetReceivedFromLower",frameListener);
means that the listener will receive the signal from any element (i.e. including the switch). You can check which module sent a signal by adding one line in receiveSignal():
void MyFrameListener::receiveSignal(cComponent *source, simsignal_t signalID, cObject *obj) {
    EV << "Signal from " << source->getFullPath() << endl;
    tempDelmeJustForTest++;
}
Problem
I have a UDP listener application that I need to write a unit test for. This listener continuously listens on a port and is meant to always be running on the product. We use the Poco libraries for frameworks not in the standard library.
Now I need to add it to the unit test application.
Current Solution
I thought it would be easiest to implement Poco::Runnable in a class RunApp that runs the application. Then I can create a new Poco::Thread in my unit test to run the RunApp class.
This works; my listener is running and I can send test messages in the unit test body after the thread is spawned. BUT, I need to stop the listener so other unit tests can run. I added a UDP message that tells the listener to kill itself, but this is only usable by the unit test and is a potential security problem.
Question
Is there a way to force a Poco::Thread to stop? Or am I structuring this unit test wrong? I don't want the listener to run during all the other unit tests.
If instead of using a Poco::Thread you use a Poco::Task, you get a thread that can be cancelled. The following sample code (ready to run as-is) should give you an idea:
#include <Poco/Task.h>
#include <Poco/TaskManager.h>
#include <Poco/Thread.h>
#include <cstdint>   // for INT32_MAX
#include <string>
#include <iostream>

using namespace std;

class UdpListenerTask : public Poco::Task {
public:
    UdpListenerTask(const string& name) : Task(name) { }

    void runTask() {
        cout << name() << ": starting" << endl;
        while (! isCancelled()) {
            // Do some work. Cannot block indefinitely, otherwise it
            // will never test the isCancelled() condition.
            doSomeWork();
        }
        cout << endl << name() << ": cancelled " << endl;
    }

private:
    int doSomeWork() {
        cout << "*" << flush;
        // Simulate some time spent doing work
        int i;
        for (i = 0; i < INT32_MAX / 1000; i++) { }
        return i;
    }
};

void runUdpProbe() {
    // Simulate some time spent running the probe.
    Poco::Thread::sleep(1000);
}

int main() {
    Poco::TaskManager tm;
    UdpListenerTask* st = new UdpListenerTask("task1");
    tm.start(st); // tm takes ownership

    // Run test 1.
    runUdpProbe();

    // Test 1 done. Cancel the UDP listener.
    st->cancel();

    // Run all the other tests

    // cleanup
    tm.joinAll();
    return 0;
}
The POCO slides on multithreading give usage examples of both Poco::Thread and Poco::Task.
As an aside, a unit test should bypass the UDP communication via abstract classes and mock objects; I think this test would better be called a feature test :-)