I'm working on an interactive command-line tool. The tool shows a prompt, and the user can enter commands and parameters, which are then processed. After a command has been executed, a new prompt appears and the user can continue entering commands. It is very similar to the gdb debugger when used in CLI mode.
The tool is mostly written in C++, with some wrappers for using C libraries.
I'd like to attach a GUI (Qt would be my first choice here) to my tool; however, I'm not sure how to do this.
If you search the internet, you'll find that many Unix developers prefer to strictly separate backend and frontend.
So, I'm thinking about making the GUI a separate executable, which just uses the functionality of my command line tool.
What's the best way to achieve this?
Should I use interprocess communication with pipes or sockets?
gdb, for example, uses TCP/IP, which even allows the GUI to run on a different machine than the server! (However, this feature is not necessary.)
If using some kind of IPC, how should the communication work? Should I use an ASCII interface (The Art Of Unix Programming prefers this)? This would have the advantage that my GUI just needs to parse the output of my command line tool. I don't have to change my tool very much, because it does not make much difference if the tool writes to a socket/pipe or just to cout.
If so, should I define a protocol for IPC or just parse the input/output?
Another way would be to integrate the GUI into my tool directly, resulting in just one executable. The Insight debugger does it this way. Insight doesn't just "use" gdb; it has its own gdb in its program code.
This way I won't have to write a parser; my GUI code can just call functions from my "base code".
Or should I make my command line tool a library, which I can link with either a CLI or a GUI frontend?
What would be the best way to solve my problem?
What are the advantages/disadvantages of the solutions above?
What do you prefer?
As you noted in the question, there are several ways you can architect two pieces of software. There are three questions you want to ask yourself:
What is the relationship between the two code segments?
How likely is each code segment to change in the future?
How is your code going to be deployed?
If one code segment is strictly a layer of abstraction on top of the other (the CLI code in your case), the core CLI functionality is relatively mature and stable, and you expect the GUI to change often, this suggests making the CLI code a library and having the GUI code include the library and "call down" into the CLI code. If, on the other hand, the GUI code ties a bunch of pieces of software together, whereas the CLI code is a smaller, more isolated module that is likely to change at a higher rate (think of a DVD inside a DVD player), you might consider making your GUI a framework that imports different "engines", or CLI modules. If you expect the two code segments to have a side-by-side relationship, for example the GUI making HTTP requests to download images while the CLI code does some CPU-intensive number crunching in the background, then you might want to explore having the CLI and GUI run as separate threads that communicate with each other. (Depending on how closely coupled the two code segments are, this can range from separate threads with no coupling, through occasional message passing via message queues, to threads with fine-grained locks when they are very tightly coupled.)
Separate processes (with the optional socket interface to run on different machines) are very useful when breaking down large software deployments into smaller, modular units. As long as your code can still fit in your head and is less than roughly 10,000 lines, the added complexity and latency aren't justified.
From your description it sounds as though the CLI code is mature and the GUI is going to be simply a shell that interacts with the CLI. I also understand that you intend to ship your code as an executable. Therefore I think your use case falls in line with making your CLI code a library and writing a CLI shell and a GUI shell that interface with it.
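For illustration, here is a minimal sketch of that layout; the ToolCore class and its execute() method are made-up names standing in for whatever your real command dispatcher looks like:

// tool_core.h -- the former CLI tool built as a library (hypothetical API)
#pragma once
#include <string>

class ToolCore {
public:
    // Executes one command line (e.g. "break main") and returns its textual result.
    std::string execute(const std::string& commandLine) {
        return "executed: " + commandLine;   // stub; the real code dispatches to existing handlers
    }
};

// cli_main.cpp -- thin CLI shell linked against the library
#include <iostream>
#include "tool_core.h"

int main() {
    ToolCore core;
    std::string line;
    while (std::cout << "> " && std::getline(std::cin, line) && line != "quit")
        std::cout << core.execute(line) << "\n";
}

// A Qt shell could link the same library and call it from a slot, e.g.:
//   void MainWindow::onCommandEntered(const QString& text) {
//       ui->output->appendPlainText(QString::fromStdString(core.execute(text.toStdString())));
//   }

The GUI never parses CLI output; both shells call the same functions, and you can still add an IPC wrapper later if remote use ever becomes a requirement.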
Related
I hope you are doing well, and I really appreciate your help with my query.
We have our system T3000 written in C++ (http://www.temcocontrols.com/ftp/software/9TstatSoftware.zip, and the code is available here: https://github.com/temcocontrols/T3000_Building_Automation_System).
I am trying to integrate the BIRT reporting tool into my C++ application. I want to create reports based on the data available in our T3000 system. I think BIRT is embeddable (??). We don't need to compile or change the BIRT project; we mainly just need to be able to call it from T3000.exe.
My thinking is that we could add a menu item to the existing T3000 and display the report with a single user click.
Can you please help me solve my issue with BIRT? I really appreciate your answer.
Regards
Raju
Well, the answer depends on what your definition of "embeddable" is.
BIRT is written in pure Java.
I can think of three different ways (plus a fourth, described further below):
1. Of course it is possible to integrate Java code into an existing C/C++ program (see Embed Java into a C++ application?).
2. You could just use the BIRT runtime engine and generate the report as PDF or HTML from the command line (that means, basically, you call the java executable from your program with several arguments; see the sketch after this list). See Birt - How to run report engine on the console? and http://eclipser-blog.blogspot.de/2008/02/automatic-generation-of-birt-reports.html for more information.
3. You could run a Java web server like Tomcat in a second process and then start your report by calling an HTTP URL (e.g. you could use the included Servlet example). See http://www.eclipse.org/birt/documentation/integrating/viewer-usage.php
4. A fourth way, similar to 3, is described below.
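As a rough illustration of the second option, here is a minimal C++ sketch that launches the BIRT report runner in a separate Java process. All paths are placeholders, and the class name and flags should be checked against the genReport script shipped with your BIRT runtime version:

// Generate a BIRT report as PDF by spawning the Java-based BIRT report runner (sketch only).
#include <cstdlib>
#include <iostream>
#include <string>

int main() {
    // Placeholder paths: BIRT runtime libs, the report design, and the output file.
    const std::string cmd =
        "java -cp \"C:/birt-runtime/ReportEngine/lib/*\" "
        "org.eclipse.birt.report.engine.api.ReportRunner "
        "-f PDF -o C:/reports/t3000.pdf C:/reports/t3000.rptdesign";

    const int rc = std::system(cmd.c_str());   // blocks until the report run has finished
    if (rc != 0)
        std::cerr << "Report generation failed, exit code " << rc << "\n";
    return rc;
}

In the T3000 case the menu handler would build this command string and run it (ideally on a worker thread so the UI stays responsive), then open the resulting PDF.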
Some notes:
The second option is slow, due to the Java and BIRT engine startup overhead (this may take several seconds). With the first and third options, that startup overhead is incurred (or can be reduced to) only once, instead of once for each report.
For the second and third options it may be necessary to modify the existing code of the example programs to suit your needs.
The first option is probably the best for an industry-quality solution, but it is also the most difficult to develop.
Anyway, Java skills are necessary IMHO.
If you plan to run this on a SOC instead of a PC, take performance into account.
Is a Java-based solution well-suited for this kind of hardware? BIRT needs quite a lot of RAM and CPU (for a SOC). Hardware like the Raspi 3 should handle this quite easily, I reckon.
I integrated the BIRT runtime into an existing Python application (all this running on an application server) in a fourth way: I wrote a listener program that listens on a TCP socket for BIRT tasks. It uses a pool of worker processes (written in Java), which in turn use the BIRT report engine to generate the output. The client program (here: written in Python) opens a TCP connection to the listener and uses this socket to tell it which report to generate (including report parameters and destination file name). The listener program then chooses a worker process and hands the task to it.
So, basically, this fourth option is similar to the third one, with two differences:
The communication is socket-based (instead of HTTP), allowing bidirectional communication.
The architecture is multi-process instead of multi-threaded. We chose this because very large reports could cause out-of-memory errors for otherwise unrelated reports that just happen to run at the same time. It's the same basic architecture Oracle chose for their reports server.
However, developing the programs took months.
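For what it's worth, the client side of such a listener can stay very small. Below is a minimal C++ sketch using POSIX sockets; the port and the one-line "report;parameters;outputfile" request format are purely illustrative, since the real protocol is whatever you define:

// Sketch of a client asking a report listener (as described above) to generate a report.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>
#include <string>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9177);                       // illustrative listener port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   // listener on the same host here

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    // Made-up request format: report design, parameters, destination file.
    const std::string request = "sales.rptdesign;year=2023;/tmp/sales_2023.pdf\n";
    write(fd, request.data(), request.size());

    char reply[256] = {};
    if (read(fd, reply, sizeof(reply) - 1) > 0)        // wait for "OK" or an error message
        std::cout << "Listener replied: " << reply;

    close(fd);
    return 0;
}

The bidirectional socket also lets the listener stream back progress or error details, which is harder to do cleanly with a plain HTTP request.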
HVB: I have to give you more than a simple thanks for the explanation above, this info will save us time I am sure. Raju will be sharing our experience after we get into the project a little deeper so others can benefit.
I've been looking into centralising my computer game saves to make them easier to back up and restore, as well as putting them up on the cloud via Dropbox, but they are in so many places that it is quite difficult. I noticed that Windows 7 and Vista now support symbolic links, so I've been playing around with that, but I was wondering the following:
Is it possible (code example or a point in the right direction) for an application (vb.net or C++) to spoof a file or folder?
E.g. Application A (a game like Diablo III or Civilization V) attempts to read from or write to file A (the game save); application B (the save repository) detects this read/write request and pipes the request through itself, performing the request on file B (the actual game save in another location). Application A is in no way altered and treats the file normally.
Note: I realise there are many simple ways of performing essentially the same task, such as monitoring the use of Application A, or periodically checking file A and copying it if it has been altered since the last check, etc., but all these methods have drawbacks, and I'm less interested in making it work than in whether it is possible.
It is entirely possible to do this through a file system filter driver. For information about these, take a look here:
http://msdn.microsoft.com/en-us/windows/hardware/gg462968
Filter drivers can hook into CreateFile operations and redirect the create to a different place if you want, but they are much harder to write as compared to normal applications. They run in kernel mode and must obey the limitations of drivers.
You can "fake" special folders, like control panel does, but I don't think you can create anything accessible/writeable (in an easy way). I might be wrong though. I had the same idea once too (as a compatibility step for some company stuff), but couldn't find anything supporting an easy way to do it. It seems like it might be easier to be done on Unix systems (but that's obviously no option here). Also, I wouldn't expect any nice or easy solutions for .net.
Only approach I could think about right now, would be highjacking the according API calls (e.g. FileOpen) to reroute/manipipulate them (similar to what root kits do), but I wouldn't say that's a good idea, considering it might be detected as possible malware or cheats by things like punkbuster or antivirus solutions.
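For completeness, this is roughly what that hooking approach looks like with Microsoft Detours, as a sketch only: the DLL below would have to be injected into Application A, the hard-coded paths are made up, and, as noted above, this kind of interception can be flagged by anti-cheat or antivirus software.

// Sketch: redirect Application A's CreateFileW calls to another path using Microsoft Detours.
#include <windows.h>
#include <detours.h>
#include <string>

static HANDLE (WINAPI *TrueCreateFileW)(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES,
                                        DWORD, DWORD, HANDLE) = CreateFileW;

static HANDLE WINAPI HookedCreateFileW(LPCWSTR lpFileName, DWORD dwDesiredAccess,
                                       DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSecurity,
                                       DWORD dwCreationDisposition, DWORD dwFlags,
                                       HANDLE hTemplateFile)
{
    std::wstring path = lpFileName ? lpFileName : L"";
    // Hypothetical redirection: divert the game save to the repository's copy.
    if (path == L"C:\\Saves\\game.sav")
        path = L"D:\\SaveRepository\\game.sav";
    return TrueCreateFileW(path.c_str(), dwDesiredAccess, dwShareMode, lpSecurity,
                           dwCreationDisposition, dwFlags, hTemplateFile);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);   // install the hook
        DetourTransactionCommit();
    } else if (reason == DLL_PROCESS_DETACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);   // remove the hook
        DetourTransactionCommit();
    }
    return TRUE;
}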
Yes or no depending on (using your terms) the level of abstraction that Application A is using.
If Application A performs a CreateFile to start access and passes a fixed filesystem path, then Application B would need to emulate a file system, and do so in the kernel.
On the other hand, if Application A were to use HTTP with RESTful URLs, then the HTTP server could answer all requests from files or by dynamically creating the content.
So the question can only be answered in specific by knowing the details of Application A.
I need your recommendations for continuous build products for a large (1-2MLOC) software development project. Characteristics:
ClearCase revision control
Approx 80% C++; 15% Java; 5% script or low-level
Compiles for Green Hills Integrity OS, but also some windows and JVM chunks
Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc...)
Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc... (~10 separate images; 5 distinct operating systems)
Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
Build cycle time is a major issue on the project, need support for whatever features help address this (mostly need to manage a large farm of build machines, I guess..)
Operates in a secure environment (this is a gov't program) (Edited to add: This is a classified program; outsourcing the build infrastructure is a non-starter.)
Interested in any best practices or peripheral guidance you might offer. Build automation is one of several overlapping best practices that appear to be missing on the program, but try to keep your answers focused on the build infrastructure piece and directly related observations.
Cost is not the driving concern. Scalability and ease of retrofitting onto an existing infrastructure are key.
(Edited to address #Dan's comment. ;-)
From my experience with similar systems, there are approximately two parts to this problem:
A repeatable method for checking out sources, building the software, and testing it (if you want to do continual testing as well as building), using a small number of command-line invocations.
A means of calling these command lines on various servers in the build farm.
For the latter, we've been using BuildBot, which seems to work pretty well.
For the former, we have a homegrown solution that started out as a simple bash shell script and grew ... rather substantially. From experience, I'd suggest starting out in Python rather than bash -- you'll spend far more code handling setup and configuration than actually invoking programs. (Also, it's probably easier to run it on Windows if you're doing that.)
The things I've found to be really key in our script's usefulness are:
Ironclad repeatability. We have a standard set of build tools, and the scripts start out by scrubbing environment variables. There are very few command-line options; everything goes into configuration files, and those go in version control.
Logging. We produce a log of every command that the build script executes.
Configuration file inheritance. Each variant of our software gets a configuration file, and those files can include more-general settings (which include even-more-general settings).
Extensibility. When we add a new source component, it's pretty easy to add a set of instructions for building that component (and the instructions can be arbitrary bash code). The "can be arbitrary code" part is probably key here; no way is a pre-existing product going to be able to do all of the quirky things that you need for a large complex real-world system.
You can get started with a reasonably simple script and let it grow organically as the need arises; honestly, although ours is a bit messy, I think we got a much more usable result that way than we would have with heavy top-down design.
Cost isn't an object? I've worked for Green Hills, and they've solved these issues for their in-house build/test farms. Ask them to do the same for you.
When I see emphasis on things like scalability and security in a build system, I start thinking that you might be a candidate for the enterprise-class build systems / CI systems. Conveniently, it sounds like you can afford them as well. A year-old SD Times article provides a basic breakdown between the enterprise and team-level build tools.
My company makes AnthillPro and we've worked with a number of companies on large embedded projects as well as highly secure projects. IBM is probably the largest other player in the space with BuildForge.
AnthillPro puts some extra emphasis on what you do with the images in the minutes/hours/days post build (do you install them onto simulators / hardware and run automated tests? stage them? promote them?) but we also see folks using it for just build.
I want to store a lot of configuration data pertaining to clusters, processes, IP addresses, etc. I have worked on one such product earlier where LDAP was used for this purpose. Although it was a PITA to configure the first time, I liked the transactional LDAP part, which helps with dynamic reloading of the configuration when there is a change. It can be done with a flat file using inotify, but that is not as good as transactional LDAP. But, as I said, the configuration was a real pain, and I also don't want to borrow the same LDAP idea for this product.
So can anyone give me an idea of the next best replacement: something that makes entering configuration easy, helps with dynamic configuration, and notifies my process whenever there is a change in the configuration file and exactly what changed (directly or indirectly)?
I am planning to develop my product in C++ and C.
The configuration can be edited by an admin, or if he is too lazy he can automate it using some script. It can also be edited through a CLI, but not by a running process; that would land me in concurrency and locking issues.
My program is a daemon, some sort of cluster manager running on multiple nodes.
There is no wrapper provided for the user to edit the configuration.
I am only looking for Linux/Solaris platform.
You have not really given enough background information for a good answer to be given. So, here are some of the unasked questions, the answers to which will influence your choice:
How is the configuration file edited? By your process, or by hand-editing, or by some other program?
How is the main program running - in the foreground with a user interacting, or in the background as a daemon?
If you expect people to hand-edit the configuration, then you can provide a wrapper script for doing so which sends a signal (conventionally SIGHUP) to the daemon to tell it to reread its configuration file.
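A minimal sketch of the daemon side of that convention follows; load_configuration() is a placeholder for whatever parsing you end up doing, and the wrapper script simply edits the file and then runs something like kill -HUP <pid>:

// Daemon skeleton that rereads its configuration when it receives SIGHUP.
#include <atomic>
#include <csignal>
#include <iostream>
#include <unistd.h>

static std::atomic<bool> reload_requested{false};

extern "C" void on_sighup(int) {
    reload_requested = true;   // only set a flag; do the real work outside the signal handler
}

static void load_configuration() {
    // Placeholder: parse your configuration file (path and format are up to you).
    std::cout << "configuration (re)loaded\n";
}

int main() {
    std::signal(SIGHUP, on_sighup);
    load_configuration();

    for (;;) {                                  // the daemon's main loop
        if (reload_requested.exchange(false))
            load_configuration();
        // ... normal cluster-manager work goes here ...
        sleep(1);
    }
}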
If your main program is going to guide the user through the editing, then you really don't need to tell the program when the editing is complete. It already knows.
You mention Linux in the tags; can we assume that Windows portability is not an issue?
As to configuration file formats, you can go with the vogue (and bloat) of using XML. However, although that is a good tool for programs communicating with each other, it is not very good for people to edit. You should look at E. S. Raymond's "The Art of UNIX Programming", which is a good general read and has a chapter on different configuration file formats. You should probably adopt one of the schemes outlined there. Which scheme is best depends in part on what information you have to capture in your configuration file.
If you're going to embed an interpreter (Perl, Lua, Tcl/Tk, ...) into your program, you might use that language to handle the configuration file...or you might not.
How do you unit test a large MFC UI application?
We have a few large MFC applications that have been in development for many years. We use some standard automated QA tools to run basic scripts to check fundamentals, file open, etc. These are run by the QA group after the daily build.
But we would like to introduce procedures such that individual developers can build and run tests against dialogs, menus, and other visual elements of the application before submitting code to the daily build.
I have heard of techniques such as hidden test buttons on dialogs that only appear in debug builds; are there any standard toolkits for this?
Environment is C++/C/FORTRAN, MSVC 2005, Intel FORTRAN 9.1, Windows XP/Vista x86 & x64.
It depends on how the app is structured. If logic and GUI code are separated (MVC), then testing the logic is easy. Take a look at Michael Feathers' "The Humble Dialog Box" (PDF).
EDIT: If you think about it: You should very carefully refactor if the App is not structured that way. There is no other technique for testing the logic. Scripts which simulate clicks are just scratching the surface.
It is actually pretty easy:
Assume your control/window/whatever changes the contents of a listbox when the user clicks a button and you want to make sure the listbox contains the right stuff after the click.
Refactor so that there is a separate list with the items for the listbox to show. The items are stored in that list and are not extracted from wherever your data comes from. The code that makes the listbox list things knows only about the new list.
Then you create a new controller object which will contain the logic code. The method that handles the button click only calls mycontroller->ButtonWasClicked(). It does not know about the listbox or anything else.
MyController::ButtonWasClicked() does what needs to be done for the intended logic, prepares the item list and tells the control to update. For that to work you need to decouple the controller and the control by creating an interface (pure virtual class) for the control. The controller knows only an object of that type, not the control itself.
That's it. The controller contains the logic code and knows the control only via the interface. Now you can write regular unit tests for MyController::ButtonWasClicked() by mocking the control. If you have no idea what I am talking about, read Michael's article. Twice. And again after that.
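To make that concrete, here is a minimal sketch of the interface, the controller, and a test-side mock; all names are invented for illustration:

// The view interface the controller talks to; the real MFC dialog implements this.
#include <cassert>
#include <string>
#include <vector>

struct IItemListView {
    virtual ~IItemListView() = default;
    virtual void ShowItems(const std::vector<std::string>& items) = 0;
};

// The controller holds the logic and knows the view only through the interface.
class MyController {
public:
    explicit MyController(IItemListView& view) : view_(view) {}

    void ButtonWasClicked() {
        std::vector<std::string> items = {"first", "second"};   // stand-in for the real data source
        view_.ShowItems(items);
    }

private:
    IItemListView& view_;
};

// In a unit test the dialog is replaced by a mock that records what it was told to show.
struct MockView : IItemListView {
    std::vector<std::string> shown;
    void ShowItems(const std::vector<std::string>& items) override { shown = items; }
};

int main() {
    MockView view;
    MyController controller(view);
    controller.ButtonWasClicked();
    assert(view.shown.size() == 2);   // the listbox contents are verified without any UI
}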
(Note to self: must learn not to blather that much)
Since you mentioned MFC, I assumed you have an application that would be hard to get under an automated test harness. You'll see the greatest benefit of unit testing frameworks when you build tests as you write the code. But trying to add a new feature in a test-driven manner to an application which is not designed to be testable can be hard work and, well, frustrating.
Now what I am going to propose is definitely hard work.. but with some discipline and perseverance you'll see the benefit soon enough.
First you'll need some management backing for new fixes to take a bit longer. Make sure everyone understands why.
Next buy a copy of the WELC book. Read it cover to cover if you have the time OR if you're hard pressed, scan the index to find the symptom your app is exhibiting. This book contains a lot of good advice and is just what you need when trying to get existing code testable.
Then for every new fix/change, spend some time and understand the area you're going to work on. Write some tests in an xUnit variant of your choice (freely available) to exercise current behavior (see the sketch after these steps).
Make sure all tests pass. Write a new test which exercises needed behavior or the bug.
Write code to make this last test pass.
Refactor mercilessly within the area under tests to improve design.
Repeat for every new change that you have to make to the system from here on. No exceptions to this rule.
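As an example of such a characterization test, here is a small sketch using Google Test as the xUnit variant; ComputeDiscount is a hypothetical legacy function, and the expected values are simply whatever the code returns today:

// Characterization tests: pin down the current behavior of legacy code before changing it.
// Link against gtest and gtest_main.
#include <gtest/gtest.h>

// Hypothetical legacy function, declared here and defined in the existing code base.
double ComputeDiscount(double orderTotal, int customerYears);

TEST(DiscountCharacterization, LongTimeCustomerLargeOrder) {
    // Captured by running the existing code once; this documents today's behavior.
    EXPECT_NEAR(0.15, ComputeDiscount(2500.0, 10), 1e-9);
}

TEST(DiscountCharacterization, NewCustomerSmallOrder) {
    EXPECT_NEAR(0.0, ComputeDiscount(50.0, 0), 1e-9);
}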
Now the promised land: soon, ever-growing islands of well-tested code will begin to surface. More and more code will fall under the automated test suite, and changes will become progressively easier to make. And that is because, slowly and surely, the underlying design becomes more testable.
The easy way out was my previous answer. This is the difficult but right way out.
I realize this is a dated question, but for those of us who still work with MFC, the Microsoft C++ Unit Testing Framework in VS2012 works well.
The General Procedure:
Compile your MFC Project as a static library
Add a new Native Unit Test Project to your solution.
In the Test Project, add your MFC Project as a Reference.
In the Test Project's Configuration Properties, add the Include directories for your header files.
In the Linker's Input options, add your MFC project's .lib along with nafxcwd.lib;libcmtd.lib;
Under 'Ignore Specific Default Libraries' add nafxcwd.lib;libcmtd.lib;
Under General, add the location of your MFC project's exported .lib file.
This question (https://stackoverflow.com/questions/1146338/error-lnk2005-new-and-delete-already-defined-in-libcmtd-libnew-obj) has a good description of why you need the nafxcwd.lib and libcmtd.lib.
The other important thing to check in legacy projects: in General Configuration Properties, make sure both projects are using the same 'Character Set'. If your MFC project is using a Multi-Byte Character Set, you'll need the MS Test project to do so as well.
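Once that is set up, a test in the Native Unit Test Project looks something like the following; CStringUtils::Trim is a made-up helper from the MFC static library, shown only to illustrate the shape of a test:

// Example test using the Microsoft C++ Unit Testing Framework (VS2012).
#include "CppUnitTest.h"
#include "StringUtils.h"   // hypothetical header from the MFC project built as a static library

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(StringUtilsTests)
{
public:
    TEST_METHOD(TrimRemovesSurroundingWhitespace)
    {
        CString result = CStringUtils::Trim(_T("  hello  "));   // hypothetical helper under test
        Assert::IsTrue(result == _T("hello"));                  // avoids needing a ToString<CString> specialization
    }
};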
Though not perfect, the best I have found for this is AutoIt http://www.autoitscript.com/autoit3
"AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows out-of-the-box with no annoying "runtimes" required!"
This works well when you have access to the source code of the application being driven, because you can use the resource ID numbers of the controls you want to drive. In this way you do not have to worry about simulated mouse clicks on particular pixels. Unfortunately, in a legacy application, you may well find that the resource IDs are not unique, which may cause problems. However, it is very straightforward to change the IDs to be unique and rebuild.
The other issue is that you will encounter timing problems. I do not have a tried and true solution for these. Trial and error is what I have used, but this is clearly not scalable. The problem is that the AutoIt script must wait for the test application to respond to a command before the script issues the next command or checks for the correct response. Sometimes it is not easy to find a convenient event to wait and watch for.
My feeling is that, in developing a new application, I would insist on a consistent way to signal "READY". This would be helpful to the human users as well as test scripts! This may be a challenge for a legacy application, but perhaps you can introduce it at problematic points and slowly spread it to the entire application as maintenance continues.
Although it cannot handle the UI side, I unit test MFC code using the Boost Test library. There is a Code Project article on getting started:
Designing Robust Objects with Boost
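For reference, a minimal Boost.Test case looks like this; the Calculator class is hypothetical and simply stands in for the non-UI logic you factor out of the MFC application:

// Minimal Boost.Test example for non-UI application logic (header-only variant).
#define BOOST_TEST_MODULE CalculatorTests
#include <boost/test/included/unit_test.hpp>

// Hypothetical logic class factored out of the MFC application.
class Calculator {
public:
    int Add(int a, int b) const { return a + b; }
};

BOOST_AUTO_TEST_CASE(add_sums_two_numbers)
{
    Calculator calc;
    BOOST_CHECK_EQUAL(calc.Add(2, 3), 5);
}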
Well, we have one of these humongous MFC apps at the workplace. It's a gigantic pain to maintain or extend... it's a huge ball of mud now, but it rakes in the moolah. Anyway,
We use Rational Robot for doing smoke tests and the like.
Another approach that has had some success is to create a small product-specific language and script tests that use VBScript and some control-handle spying magic. Turn common actions into commands; e.g. OpenDatabase would be a command that in turn injects the required script blocks to click on Main Menu > File > "Open...". You then create Excel sheets which are a series of such commands. These commands can take parameters too. Something like a FIT test, but more work. Once you have most of the common commands identified and the scripts ready, writing new tests is a matter of picking and assembling scripts (tagged by command IDs). A test-runner parses these Excel sheets, combines all the little script blocks into a test script and runs it.
OpenDatabase "C:\tests\MyDB"
OpenDialog "Add Model"
AddModel "M0001", "MyModel", 2.5, 100
PressOK
SaveDatabase
HTH
Actually, we had been using Rational Team Test, then Robot, but in recent discussions with Rational we discovered they have no plans to support native x64 applications, focusing more on .NET, so we decided to switch automated QA tools. This is great, but licensing costs don't allow us to enable it for all developers.
All our applications support a COM API for scripting, which we regression test via VB, but this tests the API, not the application as such.
Ideally I would be interested on how people integrate cppunit and similar unit testing frameworks into the application at a developer level.