This seems to be a typical application:
1. One part of the program should scan for audio files in the background and write their tags to the database.
2. The other part makes search queries and shows results.
The application should be cross-platform.
So the main scan loop, including adding data to the database, is not a problem. The questions are:
1. What is the best way to implement this background service: Boost (Asio) or Qt (its service framework)?
2. Which approach is better: building a native service wrapper using the libraries mentioned above, or emulating a service with a non-GUI application?
3. Should the GUI connect to the service (and if so, how would the two communicate using Boost or Qt?) or directly to the database (could locking be a problem there?)?
4. Will the choice made in point 1 consume all the CPU? How can that be avoided, and how can the file scanning be made less CPU-intensive?
I like to use Poco, which has a convenient ServerApplication class: the same application can run as a normal command-line program, as a Windows service, or as a *nix daemon without having to touch the code.
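Here is a minimal sketch of that pattern; the scanning logic is just a placeholder, but the ServerApplication API shown is Poco's:

    #include <Poco/Util/ServerApplication.h>

    class ScannerService : public Poco::Util::ServerApplication {
    protected:
        // main() runs the same way whether we were started from the console,
        // as a Windows service, or as a daemon; Poco handles the difference.
        int main(const std::vector<std::string>& /*args*/) override {
            // start the background scanning thread(s) here ...
            waitForTerminationRequest();  // block until stop request / Ctrl-C
            // ... stop the threads and flush the database
            return Application::EXIT_OK;
        }
    };

    POCO_SERVER_MAIN(ScannerService)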
If you use a "real" database (MySQL, PostgreSQL, SQL Server), then querying the database from the GUI application is probably fine and easier to do. If you use another type of database that isn't necessarily multi-user friendly, then you should communicate with the service using loopback sockets or pipes.
As for CPU usage, you could just sprinkle "sleep" calls through the code that scans files, to make sure it doesn't hog the CPU and disk I/O. Or use some kind of timer notification to let it scan in chunks periodically.
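A sketch of such a throttled loop (the tag-reading call is hypothetical):

    #include <chrono>
    #include <filesystem>
    #include <thread>

    void scanLibrary(const std::filesystem::path& root) {
        int processed = 0;
        for (const auto& entry :
             std::filesystem::recursive_directory_iterator(root)) {
            if (!entry.is_regular_file())
                continue;
            // readTagsAndStore(entry.path());  // placeholder for the real work
            if (++processed % 50 == 0)          // yield after every batch
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }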
I would like to use the following Google API written in Go: https://cloud.google.com/speech-to-text/docs/reference/libraries#client-libraries-install-go
Is it possible to use a Go API in a Qt C++ project?
It could be possible, but running Go and Qt code in the same process would not be easy and would be very brittle, since Go and Qt have very different threading (goroutines) and memory models.
However, Go has (in its standard library) many powerful packages to ease the development of server programs, in particular of HTTP or JSONRPC servers.
Perhaps you might consider running two different processes using inter-process communication facilities. Details are operating system specific. I assume you run Linux. Your Qt application could then start the Go program using QProcess and later communicate with it (behaving as a client to your Go specialized "server"-like program).
Then you could use HTTP or JSONRPC to remotely call your Go functions from your Qt application. You need some HTTP client library in Qt (it is there already under Qt Network, and you might also use libcurl) or some JSONRPC client library. Your Go program would be some specialized HTTP or JSONRPC server (and some Google Speech to Text client) and your Qt program would be its only client (and would start it). So your Go program would be some specialized proxy. You could even use pipe(7)-s, unix(7) sockets, or fifo(7)-s to increase the "privacy" of the communication channel.
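For illustration, a rough sketch of the Qt side; the helper's name, port, endpoint, and JSON fields are all invented here:

    #include <QDebug>
    #include <QJsonDocument>
    #include <QJsonObject>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QProcess>
    #include <QUrl>

    void startAndQuery() {
        auto* helper = new QProcess;
        helper->start("speech-helper");  // the hypothetical Go "server" program

        auto* nam = new QNetworkAccessManager;
        QNetworkRequest req(QUrl("http://127.0.0.1:8089/recognize"));  // assumed port/path
        req.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");

        QJsonObject body{{"audioFile", "/tmp/sample.flac"}};
        QNetworkReply* reply = nam->post(req, QJsonDocument(body).toJson());
        QObject::connect(reply, &QNetworkReply::finished, [reply] {
            qDebug() << reply->readAll();  // the Go program's JSON answer
            reply->deleteLater();
        });
    }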
If the Google Speech to Text API were huge (but it probably is not), you might use Go's reflective and introspective abilities to generate some C++ glue code for Qt: go/ast, go/build, go/parser, go/importer, etc.
BTW, it seems that the Google Speech to Text protocol uses JSON over HTTP (it appears to be a Web API) and has a documented REST API, so you might code the relevant parts directly in C++ (of course, you then need to understand all the details of the protocol: the relevant HTTP requests and JSON formats), without any Go code (or process). If you go that route, I recommend making your Qt (or C++) code for Google Speech to Text a separate free-software library (to be able to get feedback and help from outside).
I want to build an application which is based on two separate processes. One of them (Process 1) is using Qt4 for accessing the functionalities of a legacy code base. The other one (Process 2) is the UI layer of the application using Qt5.
I'll need to access the functions of Process 1 from Process 2, and I'll need to access the results of Process 2 from Process 1.
Can anyone suggest a best practice for connecting the two processes via IPC?
http://doc.qt.io/qt-4.8/ipc.html
According to that link, you have to choose between TCP/IP (QNetworkAccessManager etc.) and shared memory (QSharedMemory). In your case D-Bus would not be a good idea, as you are working on Windows.
I can also suggest having a look at QProcess: with it, your Qt5 application can execute your Qt4 application and collect the result from its standard output.
It depends a lot on how much data you need to exchange and how flexible you are with your legacy stuff.
Personally, if it is possible, I would go for QProcess.
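A minimal sketch of that QProcess route (the helper's name and the output protocol are assumptions):

    #include <QProcess>
    #include <QString>
    #include <QStringList>

    QString runLegacyTask(const QStringList& args) {
        QProcess proc;
        proc.start("legacy-qt4-backend", args);  // hypothetical Qt4 helper binary
        if (!proc.waitForFinished(30000))        // 30 s timeout
            return QString();                    // handle the failure properly
        return QString::fromUtf8(proc.readAllStandardOutput());
    }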
I am writing a C++ application on Windows that has a UI (wxWidgets), and users normally use the application via its UI.
Now I have a new requirement: the application needs to be started and controlled by another application.
I cannot develop a DLL or a similar solution.
I have access to my own code (apparently!), and the other applications are developed by other users, but I can give them details on how to control my application.
My question is: how can I allow other applications to control my application via a defined interface?
For simplicity, assume that I developed a calculator (with a UI) and I want to let other applications do math through it (for example, they may ask my application to add two numbers, and so on). As the math is very time-consuming, I need to inform them about progress and about any errors that occur during processing.
Can I open a pipe to communicate?
Any other way to achieve this?
You can use pipes or TCP sockets with a custom protocol, but it's probably better if you split your application into two parts:
One part that does the computation
The user interface
and publish the first one as an HTTP server responding to JSON requests.
Using a standard protocol eases testing and increases interoperability (you can probably also leverage existing libraries, both for implementing the server and for the JSON marshalling).
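For the calculator example, the computation half could look roughly like this; I'm sketching it with the header-only cpp-httplib library (one option among many), and the /add endpoint and parameter names are made up:

    #include <httplib.h>  // https://github.com/yhirose/cpp-httplib
    #include <string>

    int main() {
        httplib::Server svr;
        svr.Get("/add", [](const httplib::Request& req, httplib::Response& res) {
            double a = std::stod(req.get_param_value("a"));
            double b = std::stod(req.get_param_value("b"));
            res.set_content("{\"result\": " + std::to_string(a + b) + "}",
                            "application/json");
        });
        svr.listen("127.0.0.1", 8080);  // the UI talks to this over loopback
    }

Progress reports and errors then just become more endpoints (or fields in the JSON reply).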
Note that, in addition to accepting commands, any error message you would normally show (for example in a message box), and any other nested event loop such as a dialog box, needs to be rewired properly; this can be very problematic if a message or dialog box comes up as the result of a call into external code that you didn't write yourself.
This is the typical change that would have cost 10 if done early and that will cost 1000 now.
Hi, I'm working on a C++ project that I'm trying to keep OS-independent, and I have two processes that need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two asynchronously.
Client 1 will tell the intermediate process when data is ready, and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS-independent, I don't really know what to use. I have looked into MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs, and RCF. I'm currently programming on Windows, but I'd like to avoid the Windows API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back-end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until the end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As data becomes available from the model, we also want to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to issue a command to the intermediate process to export (or load) the data).
My privileges to modify the back end/model are minimal, other than to adhere to the design outlined above. I have a decent amount of C++ experience but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform-independent. They work, with some minor differences, on both Windows and Unixes (like Linux).
P.S. Windows does not support AF_UNIX sockets.
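To illustrate how minor the differences are: the socket calls themselves are shared, and essentially only the setup differs:

    #ifdef _WIN32
      #include <winsock2.h>
    #else
      #include <netinet/in.h>
      #include <sys/socket.h>
    #endif

    bool initSockets() {
    #ifdef _WIN32
        WSADATA wsa;                                   // Windows-only setup
        return WSAStartup(MAKEWORD(2, 2), &wsa) == 0;
    #else
        return true;                                   // nothing needed on Unix
    #endif
    }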
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it offers a number of different ways to communicate between them, in a platform-independent manner.
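For example, one of those ways is a named message queue; here is a small sketch with an arbitrary queue name and arbitrary sizes:

    #include <boost/interprocess/ipc/message_queue.hpp>

    namespace bip = boost::interprocess;

    void producer() {                       // e.g. the model (client 1)
        bip::message_queue::remove("model_to_gui");
        bip::message_queue mq(bip::create_only, "model_to_gui",
                              /*max_num_msg=*/100, /*max_msg_size=*/256);
        const char msg[] = "data ready";
        mq.send(msg, sizeof(msg), /*priority=*/0);
    }

    void consumer() {                       // e.g. the GUI (client 2)
        bip::message_queue mq(bip::open_only, "model_to_gui");
        char buf[256];
        bip::message_queue::size_type received = 0;
        unsigned int priority = 0;
        mq.receive(buf, sizeof(buf), received, priority);  // blocks until data arrives
    }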
I am not sure whether you have considered the message format, but if you are sending structured data between processes, you should look at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how they are passed.
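A sketch of what that looks like on the C++ side, assuming a message type Report generated from a hypothetical report.proto; only the serialize/parse calls are the actual protobuf API:

    #include "report.pb.h"  // generated by protoc from report.proto (assumed)
    #include <string>

    std::string encode(const Report& r) {
        std::string wire;
        r.SerializeToString(&wire);  // compact binary, versioned by field tags
        return wire;
    }

    bool decode(const std::string& wire, Report* r) {
        return r->ParseFromString(wire);  // tolerant of unknown/missing fields
    }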
boost::asio is platform-independent, although it doesn't imply C++ at both ends. Of course, when you are using C++, you can use boost::asio as your transport.
I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell whether my users are specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to report back to the parent build process the millions of file I/O events that the children will be generating? I've thought about having them write to a non-blocking socket, but I've been wondering whether pipes, shared memory, or maybe COM would be better.
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
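A hypothetical batching wrapper, just to make the idea concrete (the actual send call is left as a placeholder):

    #include <string>

    class ReportBatcher {
    public:
        explicit ReportBatcher(std::size_t flushBytes = 4096)
            : limit_(flushBytes) {}

        void add(const std::string& line) {   // one file I/O report
            buffer_ += line;
            buffer_ += '\n';
            if (buffer_.size() >= limit_)     // transmit only every few KB
                flush();
        }

        void flush() {
            if (buffer_.empty()) return;
            // sendToParent(buffer_);         // placeholder: socket/pipe write
            buffer_.clear();
        }

    private:
        std::string buffer_;
        std::size_t limit_;
    };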
If you stay in the Windows world (none of your machines runs Linux or anything else), named pipes are a good choice, because they are fast and can be accessed across machine boundaries. I think shared memory is out of the race, because it can't cross a machine boundary. Distributed COM lets you formulate the contract in IDL, but I think XML messages via pipes are also fine. XML messages have the benefit of working completely independently of the channel: if you need Linux later, you can switch to TCP/IP transport and send the same XML messages.
Some additional techniques with limitations:
Another forgotten but strong candidate is RPC (remote procedure calls). Lots of Windows services rely on it, but I think RPC is hard to program.
If you are on the same machine and only need to send some status information, you can register a window message via RegisterWindowMessage() and send messages via SendMessage(), as sketched below.
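A sketch of that technique; the message name and the way the parent window is located are assumptions:

    #include <windows.h>

    void notifyParent(int filesProcessed) {
        static const UINT msg =
            RegisterWindowMessageW(L"MyBuildTool.FileReport");    // invented name
        HWND target = FindWindowW(L"MyBuildToolParent", nullptr); // assumed window class
        if (target)
            SendMessageW(target, msg, static_cast<WPARAM>(filesProcessed), 0);
    }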
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet, MemcacheDB, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?