Are virtual functions safe in real-time audio programming? [closed] - c++

Real-time audio programming has particular constraints, due to the need to avoid audio glitches. Specifically, the audio thread should not allocate or deallocate memory, or otherwise interact with the operating system.
When a virtual function is called, the program must find the relevant virtual table, look up the function pointer in it, and then call the function through that pointer. Is this process real-time safe?

Yes, it's fine. Virtual function dispatch is just like writing (*(obj->vtable[5]))(obj, args...). It doesn't involve any operations of unknown or possibly surprising complexity like allocating memory or I/O.
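To make that concrete, here is a minimal sketch (all names invented for illustration) of roughly what a virtual call lowers to: one load of the hidden vptr, one table index, and one indirect call.

#include <cstdio>

struct Animal;

// The hand-rolled "vtable": a table of function pointers built at compile time.
using SpeakFn = void (*)(const Animal*);

struct VTable {
    SpeakFn speak;
};

struct Animal {
    const VTable* vtable; // stands in for the hidden vptr in every polymorphic object
};

void dog_speak(const Animal*) { std::puts("woof"); }

const VTable dog_vtable = { &dog_speak };

int main() {
    Animal dog{ &dog_vtable };
    // Essentially what `animal->speak()` compiles to:
    dog.vtable->speak(&dog); // load vptr, index into the table, indirect call
}

Every step is a constant-time memory access, which is why it is considered real-time safe: the vtable behaves like any other read-only memory your audio thread touches.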

A real-time system is not defined by the programming language, but rather by the OS and hardware.
So long as the system is real-time and the executing software is deterministic, you will get real-time performance. With regard to your question, the use of virtual functions does not violate determinism.
Another concern might be latency. The amount of latency you encounter is determined by the OS, the hardware, and the software, but as Matt Timmermans mentioned in his answer, virtual functions cause little overhead and will not contribute significantly to latency.
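As a sketch of how this plays out in practice, here is a hypothetical audio callback (the Processor, Gain, and AudioEngine names are invented for the example, not any real audio API) that uses virtual dispatch freely while avoiding the operations that break real-time safety:

#include <vector>

struct Processor {
    virtual ~Processor() = default;
    virtual void process(float* buffer, int numSamples) = 0;
};

struct Gain : Processor {
    float gain = 0.5f;
    void process(float* buffer, int numSamples) override {
        for (int i = 0; i < numSamples; ++i)
            buffer[i] *= gain; // plain arithmetic: deterministic, no system calls
    }
};

struct AudioEngine {
    std::vector<Processor*> chain; // built on a non-real-time thread, before playback

    // The audio-thread callback: virtual dispatch is fine here, but no
    // new/delete, no locks, no file or network I/O.
    void audioCallback(float* buffer, int numSamples) {
        for (Processor* p : chain)
            p->process(buffer, numSamples);
    }
};

int main() {
    Gain gain;
    AudioEngine engine;
    engine.chain.push_back(&gain); // setup (which may allocate) happens outside the audio thread
    float buffer[256] = {};
    engine.audioCallback(buffer, 256);
}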

Related

Memory and performance in C++ Game [closed]

I'm still new to C++, but I catch on quick and also have experience in C#. What are some performance-safe actions I can take to ensure that my game runs efficiently? Also, in which scenario am I more likely to run out of memory: 2k to 3k bullet objects on the stack, or on the heap? I think the stack is generally faster, but I've heard that using too much of it causes a stack overflow. That being said, how much is too much, exactly?
Sorry for the plethora of questions; I just want to make sure I don't design a game engine that relies on good PCs in order to run well.
Firstly, program your game safely, and only worry about optimizations like memory layout after profiling and debugging.
That being said, I have to dispel the myth that the stack is faster than the heap. What matters is cache performance.
The stack is generally faster for small, quick accesses, because the stack is usually already in the cache. But when you are iterating over thousands of bullet objects on the heap, as long as you store them contiguously (e.g. std::vector, not std::list), everything will be loaded into the cache as you iterate, and there should be no performance difference.
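A minimal sketch of that layout (the Bullet type is invented for the example): thousands of small objects stored contiguously in a std::vector, iterated sequentially so the prefetcher can keep the cache warm.

#include <cstdio>
#include <vector>

struct Bullet {
    float x, y;
    float vx, vy;
};

int main() {
    // The Bullet elements live in one contiguous heap block.
    std::vector<Bullet> bullets(3000, Bullet{0.f, 0.f, 1.f, 1.f});

    const float dt = 1.0f / 60.0f;
    for (Bullet& b : bullets) { // sequential access: prefetch-friendly
        b.x += b.vx * dt;
        b.y += b.vy * dt;
    }
    std::printf("first bullet at (%f, %f)\n", bullets[0].x, bullets[0].y);
}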

C++ features going extinct [closed]

Are there some low level features of C++ that are growing extinct because of the increasing availability of computer resources like memory, disk space, CPU power etc?
Not per se a C++ feature (it's common to C), but the register keyword doesn't do much anymore. It used to be a recommendation to the compiler to keep a variable in a register, but compilers now make that decision better themselves, so it is essentially ignored (and C++17 removed its meaning entirely). When I learned C, the chapter on loops was full of
for(register int i ...)
Compilers develop, but the language as such is likely to stay the same (at least old language standards will remain), because otherwise, old code will break.
The inline keyword no longer means "inline this function"; instead it has semantics concerning multiple definitions of the same function across translation units (the final binary will contain only ONE copy of the function, rather than several).
This is an effect of compilers becoming cleverer about when to inline (most modern compilers will, for example, inline any function that is static and called only once, regardless of size).
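A small illustration of the modern meaning (the file name and function are made up for the example):

// utils.h -- imagine this header included from several .cpp files
#pragma once

inline int square(int x) {  // "inline" here means: duplicate definitions
    return x * x;           // across translation units are allowed and
}                           // merged into one, not "always inline this call"

Without inline, two .cpp files including this header would each emit a definition of square and the linker would reject the duplicate symbol; with inline, the duplicates are merged into one.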
Obviously, with more hardware resources, the solution may change - if you can write something in Python and it's reasonably speedy, why write it in C or C++? 20 years ago, that project may not even have been possible with hand-written assembler...
Bitfields are often pointless nowadays. They typically save only a few bytes per object, but accessing them is slower. So if you're not running out of memory, your program is probably faster without them. (They make a lot of sense when they can prevent your program from swapping to disk; disk is 100x slower than RAM)
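A quick sketch of that trade-off (fields invented for the example): the bitfield version packs into fewer bytes, but each access costs extra shift/mask instructions.

#include <cstdio>

struct Packed {               // bitfields: compact, slower to access
    unsigned flags   : 4;
    unsigned state   : 4;
    unsigned counter : 24;
};

struct Plain {                // whole integers: larger, direct loads and stores
    unsigned flags;
    unsigned state;
    unsigned counter;
};

int main() {
    std::printf("Packed: %zu bytes, Plain: %zu bytes\n",
                sizeof(Packed), sizeof(Plain)); // typically 4 vs 12
}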

Disadvantage of using Polymorphism (Technical) [closed]

I know the main advantages of polymorphism, which are:
It helps programmers reuse code and classes that have been written,
tested, and implemented once. They can be reused in many cases.
A single variable can be used to refer to values of multiple types.
It reduces coupling.
But when I searched for its disadvantages, I got answers like
It's esoteric. Not very easy for the beginner to just pick up and go
with it. Rather, it often takes years of dedication before
abstraction becomes second nature.
What I want to know is whether there are any technical disadvantages to using polymorphism?
Below are a few technical disadvantages.
Run-time polymorphism has a performance cost, because the call target
must be resolved at run time; this can degrade performance if there are
a lot of virtual function calls.
Each polymorphic object carries a vptr (virtual pointer) of 4 bytes (in
practice the size can differ), plus the per-class overhead of the
look-up table itself.
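A minimal sketch of that per-object cost (assuming a typical 64-bit ABI; exact sizes vary by platform and compiler):

#include <cstdio>

struct NonVirtual {
    int value;
};

struct Virtual {
    int value;
    virtual void f() {} // adds a hidden vptr to every instance
};

int main() {
    std::printf("NonVirtual: %zu bytes, Virtual: %zu bytes\n",
                sizeof(NonVirtual), sizeof(Virtual)); // commonly 4 vs 16
}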
Virtual method calls (dynamic dispatch) have a slight run-time penalty, as it needs to resolve the function to be called at the time of the call. In general, this performance penalty is nothing to be worried about. However, I did some testing a couple of years back; you may experience noticeable slowdowns if you're making a lot of virtual calls and these resolve to a different function each time. This is because it messes with the CPU's branch prediction.
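Here is a rough sketch of the pattern behind that slowdown (Shape, Circle, and Square are invented for illustration): base-class pointers whose dynamic types alternate, so the indirect branch target changes on nearly every call.

#include <cstdio>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() = default;
    virtual float area() const = 0;
};
struct Circle : Shape { float area() const override { return 3.14159f; } };
struct Square : Shape { float area() const override { return 1.0f; } };

int main() {
    // Interleaved types: each virtual call may jump to a different target,
    // which the CPU's indirect-branch predictor handles poorly.
    std::vector<std::unique_ptr<Shape>> shapes;
    for (int i = 0; i < 100000; ++i) {
        if (i % 2) shapes.push_back(std::make_unique<Circle>());
        else       shapes.push_back(std::make_unique<Square>());
    }

    float total = 0.f;
    for (const auto& s : shapes)
        total += s->area(); // hard-to-predict indirect branch

    // Grouping objects of the same dynamic type together (e.g. one vector
    // per concrete type) makes the branch target monotonous and predictable.
    std::printf("total area: %f\n", total);
}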

Bankers Algorithm with realtime process [closed]

How can I give a process from the Task Manager (like notepad.exe) as an input process for my Banker's Algorithm (deadlock detection) implementation?
It's going to be hard, and probably unfeasible, to keep track of all the OS and external conditions needed to implement a real deadlock-prevention algorithm on a real application. Modern OSes (when we're not talking about RT-aware systems) prefer not to implement such algorithms because of their overwhelming complexity and expense.
In other words: in the worst case you can get out of a Windows deadlock with a simple reboot, and given how rarely that happens, it isn't deemed a huge problem in the desktop OS market.
Thus I recommend writing a simple test case with a dummy application (a minimal sketch follows below) that will both
Serve your purpose
Let you know exactly what's being used by your application, so you can manage the complexity
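Here is a minimal sketch of such a dummy setup: the classic Banker's safety check run on hard-coded numbers (all values invented), rather than on resources extracted from a live OS process.

#include <cstdio>
#include <vector>

// Returns true if there is some order in which every process can finish.
bool isSafe(std::vector<int> available,
            const std::vector<std::vector<int>>& allocation,
            const std::vector<std::vector<int>>& need) {
    const size_t n = allocation.size(); // number of processes
    const size_t m = available.size();  // number of resource types
    std::vector<bool> finished(n, false);

    for (size_t done = 0; done < n; ) {
        bool progressed = false;
        for (size_t p = 0; p < n; ++p) {
            if (finished[p]) continue;
            bool canRun = true;
            for (size_t r = 0; r < m; ++r)
                if (need[p][r] > available[r]) { canRun = false; break; }
            if (canRun) {
                for (size_t r = 0; r < m; ++r)
                    available[r] += allocation[p][r]; // process finishes, releases all
                finished[p] = true;
                ++done;
                progressed = true;
            }
        }
        if (!progressed) return false; // no process can proceed: unsafe state
    }
    return true;
}

int main() {
    // Two resource types, three dummy "processes" (made-up numbers).
    std::vector<int> available = {3, 3};
    std::vector<std::vector<int>> allocation = {{0, 1}, {2, 0}, {3, 0}};
    std::vector<std::vector<int>> need       = {{7, 3}, {1, 2}, {5, 0}};
    std::printf("state is %s\n",
                isSafe(available, allocation, need) ? "safe" : "unsafe");
}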
As a sidenote: applications like notepad.exe are not real-time processes, even if you give them "Realtime" priority in the Windows Task Manager (and not even soft real-time). Truly real-time processes have time constraints (i.e. deadlines) that they MUST meet. This isn't true in any desktop OS, since they're built with a different concept in mind (time sharing). Linux has some RT extensions (e.g. Xenomai) that make the kernel's scheduling truly real-time, but I'm not aware of the current status of that work.

Considering the Chaos Monkey in Designing and Architecting Embedded Systems [closed]

I work on embedded systems with limited memory and throughput. Making a system more robust requires more memory, and time on the processor. I like the idea of the Chaos Monkey to figure out if your system is fault tolerant, however with limited resources I'm not sure how feasible it is to just keep adding code to it. Are there certain design considerations whether in the architecture or otherwise that would improve the fault handling capabilities, without necessarily adding "more code" to a system?
One technique I have seen for preventing an if statement in C (or C++) that accidentally assigns rather than compares against a constant value is to write the constant on the left-hand side of the comparison. That way, if you accidentally write an assignment (say, to the number 5), the compiler will complain and you're likely to find the issue right away.
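To make that concrete, a quick sketch of the idiom:

#include <cstdio>

int main() {
    int mode = 3;

    if (5 == mode)            // constant on the left: a comparison
        std::puts("mode five");

    // if (5 = mode) {}       // typo (= instead of ==): compile error,
    //                        // because you cannot assign to a literal
    // if (mode = 5) {}       // compiles (warning at best) and silently
    //                        // sets mode to 5

    std::printf("mode = %d\n", mode);
}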
Are there Architectural or Design decisions that can be made early on that prevent possible redundancy/reliability issues in a similar way?
Yes, many other techniques can be used. You'd do well to purchase and read "Code Complete".
Code Complete on Amazon