I am thinking of rewriting a monolithic C/C++ application (initially it was a single C++ executable) and I am trying to design my application to be more modular. I was thinking that I could deliver all my modules as DLLs (or .so objects on a Linux platform) and compose my application at runtime instead of shipping a single executable. While I am seeking modularity, I am doing my best to keep in mind that speed is also important for this application. So my design should be a tradeoff between modularity and performance.
This is an IoT application, aimed at collecting various data depending on the geolocation of the vehicle.
In my current application, there are 3 components:
AntennaService: this is the main service component of my application. This module would load the others. On every move of the vehicle, it queries the ConfigurationDataService, asking it to return the closest geo point indexed in a flat configuration file. When needed, it fires events to log data asynchronously via the log service.
LogService: a module service that uses the pub/sub mechanism to log data asynchronously, either locally or online depending on the internet connection.
ConfigurationDataService: this is a module service that could potentially be called simultaneously by various other components querying it about which configuration to use. This module reads protocol buffer files where read-only configuration data is indexed. Depending on the query criteria supplied by the other modules, it filters or computes the static configuration data before answering.
The problem that I am facing, then, is that I cannot find the best way to model the ConfigurationDataService in order to get fast, optimal responses:
Should it use locks and critical sections in order to respond to the parallel queries from the other modules? Or do you think that there could be a far better design?
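Since the configuration data is read-only after it is loaded, one common design avoids per-query locking almost entirely: publish an immutable snapshot behind a shared_ptr, and let readers search without holding any lock. A minimal sketch of that idea, where GeoPoint and the flat vector are placeholders for your actual protocol-buffer-backed types:

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical record type; the real one would come from the
// protocol buffer schema.
struct GeoPoint {
    double lat = 0.0;
    double lon = 0.0;
    std::string payload;
};

class ConfigurationDataService {
public:
    // Called once at startup (or on reload) by the loading thread.
    void load(std::vector<GeoPoint> points) {
        auto snapshot =
            std::make_shared<const std::vector<GeoPoint>>(std::move(points));
        std::lock_guard<std::mutex> lock(swapMutex_);
        data_ = std::move(snapshot);
    }

    // Called concurrently by any module: the mutex is held only for
    // the pointer copy, never while searching or filtering.
    std::shared_ptr<const std::vector<GeoPoint>> snapshot() const {
        std::lock_guard<std::mutex> lock(swapMutex_);
        return data_;
    }

private:
    mutable std::mutex swapMutex_;  // protects only the pointer swap
    std::shared_ptr<const std::vector<GeoPoint>> data_;
};
```

A reload simply builds a new snapshot and swaps the pointer; in-flight queries keep using the old snapshot until their shared_ptr goes out of scope, so readers never block each other.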
I am writing a C++ application on Windows that has a UI (wxWidgets), and users normally use the application via its UI.
Now I have a new requirement: the application needs to be started and controlled by another application.
I cannot develop a DLL or similar solution.
I have access to my code (apparently!), and the other application is developed by other people, but I can give them details on how to control my application.
My question is: How can I allow other applications to control my application via a defined interface?
For simplicity, assume that I developed a calculator (which has a UI) and I want to let other applications do math through my application (for example, they may ask my application to add two numbers, and so on). As the math is very time-consuming, I need to inform them about progress and any errors generated during processing.
Can I open a pipe to communicate?
Any other way to achieve this?
You can use pipes or TCP sockets with a custom protocol, but it's probably better if you split your application into two parts:
One part that does the computation
The user interface
and publish the first one as an HTTP server responding to JSON requests.
Using a standard protocol can ease testing and increase interoperability (you can probably also leverage existing libraries for both implementing the server and the JSON marshalling).
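As an illustration, here is a minimal sketch of the compute part as an HTTP server, assuming the single-header cpp-httplib library; the /add endpoint and the hand-rolled JSON string are placeholders (a real implementation would use a JSON library, and could expose e.g. a /status endpoint for progress and error reporting):

```cpp
// Compute part exposed over HTTP, sketched with cpp-httplib.
#include "httplib.h"
#include <string>

int main() {
    httplib::Server server;

    // Hypothetical endpoint: /add?a=1.5&b=2.5 -> {"result": 4.0}
    server.Get("/add", [](const httplib::Request& req, httplib::Response& res) {
        double a = std::stod(req.get_param_value("a"));
        double b = std::stod(req.get_param_value("b"));
        res.set_content("{\"result\": " + std::to_string(a + b) + "}",
                        "application/json");
    });

    server.listen("127.0.0.1", 8080);
}
```

The UI part (and any external controller) then talks to this server over loopback HTTP instead of poking at the GUI.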
Note that, in addition to accepting commands, any error message you would otherwise show in a message box, and any other nested event loop such as a dialog box, needs to be rewired properly; this can be very problematic if the message or dialog box comes up as the result of calls to external code that you didn't write yourself.
This is the typical change that would have cost 10 if done early and will cost 1000 now.
We're developing both standard and real-time applications that run on RT-Linux.
The question is: what would be an efficient way of logging application traces from both real-time and non-real-time processes?
By efficient, I mean that the process of logging application traces shouldn't cause a real-time performance hit by increasing latency, etc.
Traces should ideally be stored in a single file with timestamps, to make it easier to track the interaction between processes.
For real-time logging I'd advise approaches other than bare logging to files. Writing a lot of information to files will hurt your performance.
I can suggest other, lighter mechanisms:
Use statistics/counters to get a feeling for what your application is doing
Write/encode logs in some binary format to be processed offline; this binary format can be more compact and thus lighter (see the sketch below)
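A minimal sketch of the binary idea, assuming fixed-size records and a single writer; a real RT implementation would typically use a pre-allocated lock-free ring buffer per thread, drained by a low-priority process:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical fixed-size trace record: a timestamp, an event id and
// one payload value instead of a formatted text line.
struct TraceRecord {
    std::uint64_t timestampNs;
    std::uint32_t eventId;
    std::uint32_t value;
};

class BinaryTraceBuffer {
public:
    // Called on the RT path: copies 16 bytes, never blocks or allocates.
    bool push(std::uint32_t eventId, std::uint32_t value) {
        if (count_ >= kCapacity) return false;  // buffer full: drop, don't block
        auto now = std::chrono::steady_clock::now().time_since_epoch();
        records_[count_++] = {
            static_cast<std::uint64_t>(
                std::chrono::duration_cast<std::chrono::nanoseconds>(now).count()),
            eventId, value};
        return true;
    }

    // Called from a non-RT thread (or at shutdown) to dump raw records;
    // decoding them to readable text happens offline.
    void dump(std::FILE* out) {
        std::fwrite(records_, sizeof(TraceRecord), count_, out);
        count_ = 0;
    }

private:
    static constexpr std::size_t kCapacity = 1 << 16;
    TraceRecord records_[kCapacity];
    std::size_t count_ = 0;
};
```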
Since you are on Linux, you can use syslog():
openlog() opens a connection to the system logger for a program.
This means your program forwards messages to another program, which can run at low priority.
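A minimal usage sketch ("myapp" and the messages are placeholders):

```cpp
#include <syslog.h>

int main() {
    // Identify the program to the system logger; LOG_PID adds the
    // process id to every message.
    openlog("myapp", LOG_PID, LOG_USER);

    syslog(LOG_INFO, "sensor %d reported value %f", 3, 17.5);
    syslog(LOG_ERR, "lost connection to device %s", "ttyUSB0");

    closelog();
}
```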
If you want something fancier, then look at Boost logging (Boost.Log).
This seems to be a typical application:
1. One part of the program should scan for audio files in the background and write tags to the database.
2. The other part makes search queries and shows the results.
The application should be cross-platform.
So the main search loop, including adding data to the database, is not a problem. The questions are:
1. What is the best way to implement this background working service? Boost (Asio) or Qt (its service framework)?
2. What is the best approach: to make a native service wrapper using the mentioned libraries, or to emulate it using a non-GUI application?
3. Should I connect the GUI to the service (how would they communicate using Boost or Qt?) or directly to the database (could there be locking issues?)?
4. Will the decision in point 1 consume all the CPU? How can I avoid that, and how can I make scanning for files less CPU-intensive?
I like to use Poco, which has a convenient ServerApplication class; an application built on it can run as a normal command-line application, as a Windows service, or as a *nix daemon without having to touch the code.
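A minimal sketch of what that looks like; ScannerApp is a hypothetical name, and the comments mark where the scanning threads would go:

```cpp
#include <Poco/Util/ServerApplication.h>
#include <string>
#include <vector>

class ScannerApp : public Poco::Util::ServerApplication {
protected:
    int main(const std::vector<std::string>& /*args*/) override {
        // Start the background scanning thread(s) here.
        waitForTerminationRequest();  // blocks until service stop / Ctrl-C
        // Stop the scanning thread(s) and flush the database here.
        return Application::EXIT_OK;
    }
};

// Generates a main() that handles service/daemon registration flags.
POCO_SERVER_MAIN(ScannerApp)
```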
If you use a "real" database (MySQL, PostgreSQL, SQL Server), then querying the database from the GUI application is probably fine and easier to do. If you use another type of database that isn't necessarily multi-user friendly, then you should communicate with the service using loopback sockets or pipes.
As far as CPU usage goes, you could just use a bunch of "sleep" calls within the code that searches files, to make sure it doesn't hog the CPU and I/O. Or use some kind of interval notification to allow it to search in chunks periodically.
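A minimal sketch of that throttling idea; scanNextChunk and the sleep intervals are placeholders you would tune:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// scanNextChunk stands in for the real work: process a small batch of
// files and return false when nothing is left to scan.
void scanLoop(const std::atomic<bool>& stop,
              const std::function<bool()>& scanNextChunk) {
    using namespace std::chrono_literals;
    while (!stop) {
        if (scanNextChunk()) {
            std::this_thread::sleep_for(50ms);  // yield between chunks
        } else {
            std::this_thread::sleep_for(30s);   // idle: re-poll for new files
        }
    }
}
```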
Greetings,
I have a large piece of software developed in Eiffel. It is possible to use this code from C++, but it loads the Eiffel runtime, and I can't trust the Eiffel code and runtime to be thread-safe when accessed by multiple threads from C++.
I need to turn this native code into a service, but I would like to scale to multiple servers in case of high load. I don't want to delegate the scaling aspect to the Eiffel code and runtime, so I'm looking into wrapping this code with existing scalability options.
Is there anything under the Apache web server that'd let me provide thread-safe access to this chunk of code? How about a pool of Eiffel code instances? What I have in mind is something like this:
[lots of client requests over network] ---> [Some scalable framework] --> [One or more instances of expensive to create Eiffel code]
I'd like the framework to let me wrap multiple instances of expensive chunks of code and I'd like to scale this up just like a web farm, by adding more machines.
Best Regards
Seref
If you're not tied to Apache and any other framework would suffice, I suggest you check out the ZeroMQ message-passing framework. Its ZMQ_PUSH/ZMQ_PULL model with the zmq_tcp transport seems to do what you want.
Your setup would be something like: one "master" process servicing outside requests (in any language/platform, perhaps an Apache mod) and a runtime-configurable number of C++ worker processes that call into the Eiffel code and push results back.
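A minimal sketch of one worker, using the libzmq C API; the endpoint addresses and the eiffel_compute() wrapper are assumptions (the point is only that each worker is a separate single-threaded process, so the Eiffel runtime never sees concurrent calls):

```cpp
#include <zmq.h>
#include <cstring>

// Assumed single-threaded wrapper around the Eiffel code.
extern "C" int eiffel_compute(const char* request, char* reply, int replySize);

int main() {
    void* ctx = zmq_ctx_new();

    void* pull = zmq_socket(ctx, ZMQ_PULL);  // requests in from the master
    zmq_connect(pull, "tcp://master-host:5557");

    void* push = zmq_socket(ctx, ZMQ_PUSH);  // results back to the master
    zmq_connect(push, "tcp://master-host:5558");

    char request[1024];
    char reply[1024];
    for (;;) {
        int n = zmq_recv(pull, request, sizeof(request) - 1, 0);
        if (n < 0) break;
        if (n > static_cast<int>(sizeof(request)) - 1)
            n = sizeof(request) - 1;  // message was truncated to fit
        request[n] = '\0';

        eiffel_compute(request, reply, sizeof(reply));
        zmq_send(push, reply, std::strlen(reply), 0);
    }

    zmq_close(pull);
    zmq_close(push);
    zmq_ctx_destroy(ctx);
}
```

Scaling out is then a matter of starting more worker processes, on the same machine or on additional ones.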
I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but I've been wondering whether pipes, shared memory, or maybe COM would be better.
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
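A minimal sketch of the batching part in C++; ReportBatcher, the 4 KB threshold, and the send callback are all hypothetical, and compressing or hashing each packet could be layered on top of flush():

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Accumulates I/O reports and ships them a few kilobytes at a time,
// instead of doing one write per report. The transport callback is
// whatever you pick: socket, pipe, etc.
class ReportBatcher {
public:
    explicit ReportBatcher(std::function<void(const std::string&)> send,
                           std::size_t flushThreshold = 4096)
        : send_(std::move(send)), threshold_(flushThreshold) {}

    void add(const std::string& report) {
        buffer_ += report;
        buffer_ += '\n';
        if (buffer_.size() >= threshold_) flush();
    }

    void flush() {
        if (!buffer_.empty()) {
            send_(buffer_);
            buffer_.clear();
        }
    }

private:
    std::function<void(const std::string&)> send_;
    std::size_t threshold_;
    std::string buffer_;
};
```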
If you stay in the Windows world (none of your machines is Linux or anything else), named pipes are a good choice, because they are fast and can be accessed across machine boundaries. I think shared memory is out of the race, because it can't cross a machine boundary. Distributed COM allows you to formulate the contract in IDL, but I think XML messages via pipes are also fine. XML messages have the benefit of being completely independent of the channel: if you need Linux later, you can switch to TCP/IP transport and send the same XML messages.
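A minimal sketch of the parent's end of such a pipe, handling a single client; the pipe name is a placeholder, and a real build monitor would create one pipe instance (or use overlapped I/O) per child process:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\BuildMonitor",              // hypothetical name
        PIPE_ACCESS_INBOUND,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        64 * 1024, 64 * 1024,                     // out/in buffer sizes
        0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) return 1;

    if (ConnectNamedPipe(pipe, nullptr) ||
        GetLastError() == ERROR_PIPE_CONNECTED) {
        char message[4096];
        DWORD bytesRead = 0;
        while (ReadFile(pipe, message, sizeof(message) - 1, &bytesRead, nullptr)) {
            message[bytesRead] = '\0';
            std::printf("report: %s\n", message);  // handle one I/O report
        }
    }

    CloseHandle(pipe);
}
```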
Some additional techniques with limitations:
Another forgotten but hot candidate is RPC (remote procedure calls). Lots of Windows services rely on it, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a window message via RegisterWindowMessage() and send it via SendMessage(), as in the sketch below.
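A minimal sketch of that; the message name and the meaning of wParam are whatever the two applications agree on:

```cpp
#include <windows.h>

// Both processes register the same named message and get the same id.
static const UINT kStatusMsg = RegisterWindowMessageA("MyBuildTool.Status");

// Sender side: report a status code to a window owned by the other process.
void reportStatus(HWND targetWindow, int statusCode) {
    SendMessage(targetWindow, kStatusMsg, static_cast<WPARAM>(statusCode), 0);
}
```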
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet, MemcacheDB, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?