My question
In the Interactive Brokers TWS C++ API (or in event-driven programming in general), when is it critical to change the state/mode (m_state in the TWS API) to an "acknowledgement" state? Depending on the value of this variable, different class methods are called. Towards the end of the definition of one of these methods, the state is changed back to an acknowledgement state, which appears to allow messages to be received from the host. Can I change m_state to something else and skip the acknowledgement process altogether?
Some background information for those who don't know the TWS API
(feel free to ignore this section)
Interactive Brokers' TWS C++ API has tens of thousands of lines of code, so let me try to describe what I think the essential functionality is.
My program's entry point in Main.cpp instantiates a client object and then calls its void TestCppClient::processMessages() method on every iteration of a while loop.
On the one hand, there are a bunch of methods that get triggered by the broker whenever the broker decides to call them, and you, as a client, may or may not take advantage of the provided information.
But on the other hand, there are a bunch of methods that get triggered by my client code. Depending on the value of m_state, a different class method is triggered. In the sample code provided by IB, that is, in my TestCppClient class, most of these functions contain some helpful demonstration code, and, more to the point of my question, the last line of most of these functions sets m_state to something ending in _ACK (which, I believe, is short for "acknowledgement," a socket programming convention).
For example, when m_state is equal to ST_REUTERSFUNDAMENTALS, TestCppClient::processMessages()'s switch statement will trigger TestCppClient::reutersFundamentals(). The last line of this method's definition is
m_state = ST_REUTERSFUNDAMENTALS_ACK;
The next time TestCppClient::processMessages() is triggered (and it gets triggered all the time by the while loop, if you recall), the switch statement effectively does nothing, because that case contains only a break statement:
...
case ST_REUTERSFUNDAMENTALS:
reutersFundamentals();
break;
case ST_REUTERSFUNDAMENTALS_ACK:
break;
...
This switch statement makes up the majority of the code in this function's definition, but there is a little code at the end, outside the switch statement. The only code that runs in TestCppClient::processMessages() in this case is at the very end, which is
m_pReader->checkClient();
m_osSignal.waitForSignal();
m_pReader->processMsgs();
These lines apparently handle some low-level socket programming stuff.
Reiterating my question
So, if I didn't change the state to an acknowledgement state, these last three lines wouldn't run. But would that be a bad thing? Does anyone have experience with this? Any anecdotal pieces of information?
The TestCppClient example is fairly complicated as IB/C++ applications go, but those three lines are important. m_pReader is an instance of the EReader class, which reads incoming data and produces messages. Because it runs in its own thread, it needs special handling.
The checkClient function tells the EReader to package incoming data into an EMessage and store the message in the message queue. But the application can't access the queue until the waitForSignal function returns. Afterward, processMsgs reads the EMessage and calls the appropriate callback function.
Dealing with the EReader is a pain, but once you get an application working, you can copy and paste the code into further applications.
Yes, you can just remove all of the m_state switches. You may notice that the ST_ states are ultimately triggered from nextValidId(). In processMessages(), the ST_ cases call client functions and afterwards flip m_state to the corresponding _ACK value, whose case only breaks. The only part that really matters, if you use processMessages() to process the message queue, is
m_osSignal.waitForSignal();
errno = 0;
m_pReader->processMsgs();
which can just as well be called outside of processMessages(). Basically, whichever function you use, you can comment out all of the m_state switches, but you still need to wait for the signal and process messages as needed. It seems m_state was just a convenience for the test bed, for quickly trying out functions by commenting and uncommenting whatever you wanted to play with.
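To make that concrete, here is a minimal sketch of such a stripped-down message pump. It assumes the member names from IB's TestCppClient sample (m_pClient, m_osSignal, m_pReader); pumpMessages is a made-up name, and you would call the request functions you care about directly from your own code instead of driving them through m_state:

void TestCppClient::pumpMessages()
{
    // No state machine: wait until the EReader thread signals that data has
    // arrived, then dispatch the queued messages to the EWrapper callbacks.
    while (m_pClient->isConnected()) {
        m_osSignal.waitForSignal();
        errno = 0;
        m_pReader->processMsgs();
    }
}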
Related
In vxWorks 6.9 you can create timers, which are really just wrappers around a watchdog. You supply them a function pointer, a delay, and up to one parameter, and after the delay the function is called with that parameter. However, it is called in the interrupt context. This (for some reason) means you cannot call any "blocking" functions or the system literally crashes. You cannot call printf, and you cannot call an object's public member function, i.e. you cannot do this:
void Foo::WdCallback(Foo *foo){
foo->DoThing();
}
wdStart(wd, 16, (FUNCPTR)Foo::WdCallback, (_Vx_usr_arg_t)my_foo_ptr);
as it will also crash for reasons I don't understand.
What other way can we create a timer/timeout in vxWorks so that we can actually do something useful with the callback? One method I have seen uses a message queue: the watchdog function calls a message queue send function, but that means a task must be created somewhere else to dequeue from that message queue. I've also read that the watchdog callback could give a semaphore allowing a task to continue, but that means we have to create a task for every single timer-based function we want.
It looks like no matter what road we take with watchdogs, or timers, in vxWorks, we have to create an entire task just to handle the watchdog callback, due to the interrupt context. There has to be a less ridiculous way to do this. Is there a purely C++ way to write a timer? Or a simpler vxWorks implementation?
C++ shall not be used for functions executed in an interrupt context. The watchdog callback here is executed in the context of the system tick interrupt.
If you want to keep C++ code, make sure that no new/delete operations are performed, and compile the code with additional flags (this should be documented in the C++ section of the VxWorks Programmer's Guide: -fno-rtti -fno-exceptions).
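To tie this back to the question: one common pattern is to keep the watchdog routine trivial and defer the real work to a task, as the question already hints with the semaphore idea. A rough sketch, assuming the standard vxWorks APIs (wdCreate/wdStart, semBCreate/semGive/semTake, taskSpawn); Foo and DoThing are from the question, while the task name, priority and stack size are illustrative:

#include <vxWorks.h>
#include <wdLib.h>
#include <semLib.h>
#include <taskLib.h>

static SEM_ID  timerSem;
static WDOG_ID wd;

/* Interrupt context: no C++ features, no blocking calls, just give the semaphore. */
void wdCallback(_Vx_usr_arg_t arg)
{
    semGive((SEM_ID)arg);   /* semGive is safe at interrupt level */
}

/* Task context: safe to call printf, member functions, and so on. */
int timerTask(_Vx_usr_arg_t arg)
{
    Foo *foo = (Foo *)arg;
    for (;;) {
        semTake(timerSem, WAIT_FOREVER);   /* sleep until the watchdog fires */
        foo->DoThing();                    /* do the real work here */
        wdStart(wd, 16, (FUNCPTR)wdCallback, (_Vx_usr_arg_t)timerSem);   /* re-arm */
    }
}

void startTimer(Foo *myFoo)
{
    timerSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    wd       = wdCreate();
    taskSpawn((char *)"tFooTimer", 100, 0, 0x4000, (FUNCPTR)timerTask,
              (_Vx_usr_arg_t)myFoo, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    wdStart(wd, 16, (FUNCPTR)wdCallback, (_Vx_usr_arg_t)timerSem);
}

The cost is indeed one extra task per timer handler, but a single task (or one message queue drained by one task) can service several watchdogs if each callback sends a distinct message instead of giving a dedicated semaphore.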
I have a highly performance-sensitive (read: low-latency) C++17 class for logging, with member functions that can either log locally or log remotely depending on the flag with which the object is constructed. Whether it does "remote logging" or "local logging" is fully determined at construction time.
The code looks something like this
class Logger {
public:
    Logger(bool aIsTx) : isTx_(aIsTx) { init(); }
    ~Logger() {}

    uint16_t fbLog(const fileId_t aId, const void *aData, const uint16_t aSz) {
        if (isTx_)
            return remoteLog(aId, aData, aSz);       // do remote logging
        else
            return fwrite(aData, aSz, 1, fd_[aId]);  // do local logging
    }

protected:
    bool isTx_;
    // fileId_t, init(), remoteLog() and the fd_ file-handle table are defined elsewhere
};
What I would like to do is
Find some way of removing the if (isTx_) so that the code path to be used is fixed at the time of instantiation.
Since the class objects are used by multiple other modules, I would not like to templatize the class, because that would require wrapping two templatized implementations of the class in an interface, which would result in a v-table call every time a member function is called.
You cannot "templetize" the behaviour, since you want the choice to be done at runtime.
In case you want to get rid of the if because of performance, rest assured that it will have negligible impact compared to disk access or network communication. Same goes for virtual function call.
If you need low latency, I recommend considering asynchronous logging: The main thread would simply copy the message into an internal buffer. Memory is way faster than disk or network, so there will be much less latency. You can then have a separate service thread that waits for the buffer to receive messages, and handles the slow communication.
As a bonus, you don't need branches or virtual functions in the main thread since it is the service thread that decides what to do with the messages.
Asynchronicity is not an easy approach, however. There are many cases that must be taken into consideration:
How to synchronise the access to the buffer (I suggest trying out a lock free queue instead).
How much memory should the buffer be allowed to occupy? Without limit it can consume too much if the program logs faster than can be written.
If the buffer limit is reached, what should the main thread do? It either needs to fall back to synchronously waiting while the buffer is being processed or messages need to be discarded.
How to flush the buffer when the program crashes? If that is not possible, the last messages may be lost, and those are probably exactly the ones you need to figure out why the program crashed in the first place.
Regardless of choice: If performance is critical, then try out multiple approaches and measure.
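For illustration, here is a minimal sketch of the asynchronous approach described above: the caller only appends to an in-memory queue, and a service thread drains it and performs the slow write. It uses a mutex and condition variable rather than a lock-free queue, applies no buffer limit, and the sink is illustrative, so treat it as a starting point rather than a drop-in implementation:

#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

class AsyncLogger {
public:
    AsyncLogger() : worker_([this] { run(); }) {}

    ~AsyncLogger() {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Hot path: copy the message into the in-memory queue and return immediately.
    void log(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.push_back(std::move(msg));
        }
        cv_.notify_one();
    }

private:
    // Service thread: waits for messages and performs the slow I/O.
    void run() {
        std::unique_lock<std::mutex> lock(mtx_);
        while (!done_ || !queue_.empty()) {
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                std::string msg = std::move(queue_.front());
                queue_.pop_front();
                lock.unlock();
                writeToSink(msg);   // disk/network access happens off the hot path
                lock.lock();
            }
        }
    }

    void writeToSink(const std::string &msg) {
        // Illustrative sink: replace with the real local fwrite or remote send.
        std::fwrite(msg.data(), 1, msg.size(), stdout);
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<std::string> queue_;
    bool done_ = false;
    std::thread worker_;
};

The hot-path cost is then one mutex lock plus a string copy; a lock-free queue and preallocated message buffers (as suggested above) can shrink it further, but, as said, measure before adding that complexity.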
I'm writing a library with a C (not C++) interface that contains an event loop, call it processEvents. This should be called in a loop, and invokes user-defined callbacks when something has happened. The "something" in this case is triggered by an RPC response that is received in a different thread, and added to an event queue which is consumed by processEvents on the main thread.
So from the point of view of the user of my library, the usage looks like this:
void myCallback(void *userData) {
// ...
}
int main() {
setCallback(&myCallback, NULL);
requestCallback();
while (true) {
processEvents(); /* Eventually calls myCallback, but not immediately. */
doSomeOtherStuff();
}
}
Now I want to test, using Google Test and Google Mock, that the callback is indeed called.
I've used MockFunction<void()> to intercept the actual callback; this is called by a C-style static function that casts the void *userData to a MockFunction<void()> * and calls it. This works fine.
The trouble is that the callback doesn't necessarily happen on the first call to processEvents; all I know is that it happens eventually if we keep calling processEvents in a loop.
So I guess I need something like this:
while (!testing::Mock::AllExpectationsSatisfied() && !timedOut()) {
processEvents();
}
But this fictional AllExpectationsSatisfied doesn't seem to exist. The closest I can find is VerifyAndClearExpectations, but it makes the test fail immediately if the expectations aren't met on the first try (and clears them, to boot).
Of course I could make this loop run for a full second or so, which would make the test green, but also needlessly slow.
Does anyone know a better solution?
If you are looking for efficient synchronization between threads, check out std::condition_variable. Until the next event comes in, your implementation with a while loop will keep spinning, using up CPU resources while doing nothing useful.
Instead, it would make better sense to suspend the execution of your code, freeing up processing time for other threads, until an event comes in, and then signal to the suspended thread to resume its work. Condition variables do just that. For more information, check out the docs.
Furthermore, you might be interested in looking into std::future and std::promise, which basically encapsulate the pattern of waiting for something to come asynchronously. Find more details here.
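To illustrate the future/promise suggestion in this particular test setup (this sketch sidesteps MockFunction entirely; setCallback, requestCallback and processEvents come from the question, everything else is made up): the callback fulfils a promise, and the loop polls the matching future, so it stops as soon as the callback has run, with a deadline so a broken implementation cannot hang the test:

#include <chrono>
#include <future>
#include <gtest/gtest.h>

static std::promise<void> callbackFired;

static void myCallback(void * /*userData*/) {
    callbackFired.set_value();   // signal that the callback has run (only valid once)
}

TEST(EventLoopTest, EventuallyInvokesCallback) {
    std::future<void> fired = callbackFired.get_future();
    setCallback(&myCallback, nullptr);
    requestCallback();

    const auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
    while (fired.wait_for(std::chrono::seconds(0)) != std::future_status::ready) {
        ASSERT_TRUE(std::chrono::steady_clock::now() < deadline) << "callback was never invoked";
        processEvents();
    }
}

The worst case is still bounded by the deadline, but in the normal case the loop exits on the very iteration in which the callback fires.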
After posting the question, I thought of using a counter that is decremented by each mock function invocation. But #PetrMánek's answer gave me a better idea. I ended up doing something like this:
MockFunction<void()> myMockFunction;
// Machinery to wire callback to invoke myMockFunction...
Semaphore semaphore; // Implementation from https://stackoverflow.com/a/4793662/14637
EXPECT_CALL(myMockFunction, Call())
.WillRepeatedly(Invoke(&semaphore, &Semaphore::notify));
do {
processEvents();
} while (!semaphore.try_wait());
(I'm using a semaphore rather than std::condition_variable because (1) spurious wakeups and (2) it can be used in case I expect multiple callback invocations.)
Of course this still needs an overall timeout so a failing test won't hang forever. An optional timeout could also be added to try_wait() to make this more CPU-efficient. These improvements are left as an exercise to the reader ;)
Currently, I am using the usart_read_buffer_job function provided by the ASF library. I placed this function inside the while(1) loop as below:
int main()
{
Some pieces of code for initialization;
while(1)
{
usart_read_buffer_job();
while(1) // The second infinite loop
{
Some other pieces of code;
}
}
}
It works perfectly well for the first interrupt handler call. However, after returning from the handler, the handler is never invoked again. The program keeps running inside the second infinite loop and never reaches usart_read_buffer_job() again, which is probably why the handler stops working.
In this case, my purpose is to jump into the USART interrupt handler regardless of the number of infinite loops being executed in main(). Of course, by not using ASF, the issue could be solved by setting up the handler manually, but I still wonder how it could be solved with other functions provided by ASF.
Looking forward to getting the response from the community soon.
Thank you,
Thanks for the very quick response.
The code I am working on is confidential, so I can only share the ASF library functions with you and explain briefly how they work.
In the ASF, we typically have two functions for handling the interrupt, namely usart_read_buffer_job and usart_read_job.
Before using these two functions, the callback handling is set up with two other functions:
usart_register_callback: Registers a callback function, which is implemented by the user.
usart_enable_callback: The callback function will be called from the interrupt handler when the conditions for the callback type are met.
And these two functions above are placed in the initialization code as shown in the question.
Then, depending on the design purpose, the handlers are called whenever a group of characters (usart_read_buffer_job) or a single character (usart_read_job) is received via the UART peripheral.
usart_read_buffer_job: Sets up the driver to read from the USART to a given buffer. If registered and enabled, a callback function will be called.
usart_read_job: Sets up the driver to read data from the USART module to the data pointer given. If registered and enabled, a callback will be called when the receiving is completed.
You could find more details about these functions on http://asf.atmel.com/docs/latest/samd21/html/group__asfdoc__sam0__sercom__usart__group.html
In this case, assuming that the main program stalls in some unexpected infinite loop, the handlers should still run whenever a command arrives from the UART peripheral, and could then carry out some important tasks to recover from the problem, for example.
I hope this explanation makes my previous question clearer, and I hope to get a response from you all soon.
First of all, do not put an infinite loop inside an infinite loop!
If you find yourself doing this, it indicates a probable design flaw. Please revise your design.
(Let's call it the first rule)
Second, you seem to use event-driven I/O (rather than polling) by registering a handler/callback.
Here is a second rule: you never call a handler yourself.
You register a callback function (handler) to be called when the event occurs.
If you are doing the initialization and configuration correctly, the code should work following this scheme:
void initialization()
{
/*Device and other initialization*/
...
usart_register_callback(...); /*Register usart_read_callback() here*/
usart_enable_callback(...);
}
int main()
{
initialization();
while(1)
{
/*Some other pieces of code*/
}
}
void usart_read_callback(...)
{
usart_write_buffer_job(...); /*E.g. echo the data just received into your read buffer, or process it here*/
}
usart_read_buffer_job() will only invoke the callback one time, so after the callback has been dealt with, you must invoke usart_read_buffer_job() again (perhaps at the end of the callback if the processing is finished).
Only one infinite loop can run unless you have some kind of separate tasks (such as in FreeRTOS), each with their own loop.
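Putting those last two points together, a rough sketch of the re-arm pattern could look like this (based on the ASF SERCOM USART callback driver for the SAM D21; usart_instance, rx_buffer and MAX_RX_BUFFER_LENGTH are illustrative names, and initialization() is the routine from the scheme above that registers and enables the callback):

#include <asf.h>

#define MAX_RX_BUFFER_LENGTH 5

struct usart_module usart_instance;
volatile uint8_t rx_buffer[MAX_RX_BUFFER_LENGTH];

void usart_read_callback(struct usart_module *const module)
{
    /* Process rx_buffer here (keep it short: this runs from the interrupt handler), */
    /* then re-arm the driver so the callback fires again for the next buffer. */
    usart_read_buffer_job(&usart_instance, (uint8_t *)rx_buffer, MAX_RX_BUFFER_LENGTH);
}

int main(void)
{
    initialization();   /* configures the USART, registers and enables usart_read_callback */

    /* Kick off the first buffered read; later reads are re-armed inside the callback. */
    usart_read_buffer_job(&usart_instance, (uint8_t *)rx_buffer, MAX_RX_BUFFER_LENGTH);

    while (1) {
        /* Other work; the callback keeps firing even if this loop never returns. */
    }
}

Since the callback runs from the interrupt handler, keep the work there minimal (set a flag or copy the buffer) and do any heavy processing from the main loop.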
Add to the above question the concept of a wait/no wait indicator as a parameter to a ReadMessage function in a TCP/IP or UDP environment.
A third party function description states that:
This function is used to read a message from a queue which was defined by a previous registerforinput call. The input wait/no wait indicator will determine if this function will block on the queue specified, waiting for the data to be placed on the queue. If the nowait option is specified and no data is available a NULL pointer will be returned to the caller. When data available this function will return a pointer to the data read from the queue.
What does it mean for a function to be blocking or non-blocking?
Blocking means that execution of your code (in that thread) will stop for the duration of the call. Essentially, the function call will not return until the blocking operation is complete.
A blocking read will wait until there is data available (or a timeout, if any, expires), and then returns from the function call. A non-blocking read will (or at least should) always return immediately, but it might not return any data, if none is available at the moment.
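To make that concrete with a familiar API (POSIX sockets rather than the third-party queue function, which isn't shown here), the same recv call behaves differently depending on whether the descriptor has been put into non-blocking mode:

#include <cerrno>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>

// Put a socket into non-blocking mode; recv() on it will then return immediately.
void setNonBlocking(int sock) {
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);
}

void readExample(int sock) {
    char buf[256];

    // Blocking (the default): this call does not return until at least one byte
    // has arrived, the peer closes the connection, or an error occurs.
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0) {
        // n bytes arrived; process buf here.
    }

    // Non-blocking: the call always returns right away. If no data is available,
    // it returns -1 with errno set to EWOULDBLOCK/EAGAIN, the analogue of the
    // NULL pointer returned in the "nowait" case described in the question.
    setNonBlocking(sock);
    n = recv(sock, buf, sizeof(buf), 0);
    if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
        // No message yet: do something else and try again later.
    } else if (n > 0) {
        // Data was already waiting; process buf here.
    }
}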
An analogy if you'll permit me - sorry, it's late in the afternoon and I'm in the mood, if it gets down voted - ah well...
You want to get into a snazzy nightclub, but the bouncer tells you you cannot go in till someone comes out. You are effectively "blocked" on that condition. When someone comes out, you are free to go in - or some error condition such as "are those trainers?" Your night doesn't really kick off till you get in, your enjoyment is "blocked".
In a "non-blocking" scenario, you will tell the bouncer your phone number, and he will call you back when there is a free slot. So now you can do something else while waiting for someone to come out, you can start your night somewhere else and come back when called and continue there...
Sorry if that didn't help...
Take a look at this: http://www.scottklement.com/rpg/socktut/nonblocking.html
Here's some excerpts from it:
'By default, TCP sockets are in "blocking" mode. For example, when you call recv() to read from a stream, control isn't returned to your program until at least one byte of data is read from the remote site. This process of waiting for data to appear is referred to as "blocking".'
'Its possible to set a descriptor so that it is placed in "non-blocking" mode. When placed in non-blocking mode, you never wait for an operation to complete. This is an invaluable tool if you need to switch between many different connected sockets, and want to ensure that none of them cause the program to "lock up."'
Also, it's generally a good idea to try to search for an answer first (just type "blocking vs. non-blocking read" in a search engine), and then once you hit a wall there to come and ask questions that you couldn't find an answer to. The link I shared above was the second search result. Take a look at this great essay on what to do before asking questions on internet forums: http://www.catb.org/~esr/faqs/smart-questions.html#before
In your case, it means the function will not return until there actually is a message to return. It'll prevent your program from moving forward, but when it does move forward you'll have a message to work with.
If you specify nowait, a null pointer will be returned immediately if there are no messages on the queue, which allows you to process that situation.