Tool for detecting concurrency problems

I saw a tool which can tell you if you have a design problem in your project, and I'm wondering whether there is a tool which can tell you dynamically if there are concurrency problems in your project.

CHESS from MS Research is great (http://research.microsoft.com/en-us/projects/chess/).
It detects concurrency bugs with the help of unit tests and, importantly, the bugs are reproducible with CHESS.
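To give a rough idea of the kind of bug such tools look for, here is a minimal sketch (not tied to CHESS specifically) of a unit-test-style data race; the final assertion may pass or fail depending on how the threads interleave, which is exactly what a systematic scheduler makes reproducible:

    #include <cassert>
    #include <thread>

    // Two threads increment a shared counter without synchronization.
    // The result depends on the interleaving, so the assertion below is flaky;
    // a tool that controls the scheduler can replay the failing interleaving.
    int counter = 0;

    void increment_many() {
        for (int i = 0; i < 1000; ++i)
            ++counter;              // unsynchronized read-modify-write: a data race
    }

    int main() {
        std::thread a(increment_many);
        std::thread b(increment_many);
        a.join();
        b.join();
        assert(counter == 2000);    // may fail; fix: std::atomic<int> or a mutex
        return 0;
    }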

I think you will find that detecting this type of thing by static code analysis is basically the Halting Problem in disguise and is thus undecidable in the general case. Such a tool almost certainly doesn't exist.
The closest thing to a proving tool that does exist is to model the computation as 'Communicating Sequential Processes', which can be subjected to formal mathematical reasoning. However, this doesn't allow you to produce a tool that can take an arbitrary program in an arbitrary language and compute a proof for it.

Is it possible for a programmer to analyze unknown code fast?

I got a task related to an ancient C++ project which has no documentation or comments at all, and all the code and variable names are written in a foreign language. Do I have a chance to analyze this code in one working day and produce a design/UML so I can create new features? I have been sitting on it for 3 hours already and I feel so frustrated... Maybe somebody else has had the same problem? Any advice?
I suspect the biggest issue may be the fact that it's in a foreign language. You can use various static code analysis tools to try to understand what's going on, but if everything is presented in an unfamiliar language then that's still of little use. Your first step (I believe) is to find someone who speaks this language and have them translate as you go...
1) Use Doxygen. You can configure Doxygen to extract the code structure from undocumented source files.
2) Use Source Insight. Source Insight is an advanced code editor and browser with built-in analysis for C/C++, C#, and Java programs.
Short answer, no - you probably don't have a chance to understand the code in one day. Reading/maintaining code is one of the hardest things to do, especially when it's lacking documentation. The fact that the code is in a foreign language (!) makes it even harder.
Sounds like you are on a very restricted (unrealistic) time-budget, but Working With Legacy Software is a good book if you're working with legacy systems. If you are planning to keep adding new features to the legacy system it's your responsibility to make your management aware of the scope of the operation. Or at least try.
Under this time constraint (1 day) it may or may not be doable depending on the size of the project - if it's a few hundred lines of code then for sure; if it's a serious project with several tens of thousands of lines of code, then likely not.
The first thing you need to know is what this program is supposed to do at all. If you have no idea what it does and how it does it, then analyzing the code will give you the answer, but it will be a long and frustrating task. So my first suggestion would be to get yourself familiar with the outer workings of the software - what it is supposed to do and, generally, how it is supposed to do it. If you are doing this as part of your work then you should be able to get someone to walk you through using the program - even if its UI is in a foreign language (which I hope it isn't, even if the code was written by a foreign-language speaker).
Once you know what the software is attempting to do, it should be fairly straightforward (even if lengthy and daunting) to rewrite all the comments in your own language so you can understand it. I suggest doing so in a bottom-up approach: it's easier to understand the small and trivial things a program does than to understand the top-level logic - and a lot of those trivial things together make up the logic of the software.
Only once you understand - to a large degree, anyway - the inner workings of the program can you write its functional spec and work on the features.
Non-free way on Windows:
You can use CppDepend. This application is able to parse your Visual Studio project or your source files. It gives you a lot of information, like dependency trees. You can try the trial version (maybe it will be enough for what you have to do).
Free way multi-platform:
You can use doxygen with a special configuration (extract code structure from undocumented code) and analyze the result.
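For what it's worth, the Doxygen settings involved are roughly these (a minimal sketch; the option names are standard Doxygen, the values are just a starting point for your project):

    # Doxyfile sketch: extract structure from undocumented sources.
    # EXTRACT_ALL makes Doxygen document entities even when they have no comments;
    # HAVE_DOT (requires Graphviz) enables the generated diagrams and call graphs.
    INPUT            = ./src
    RECURSIVE        = YES
    EXTRACT_ALL      = YES
    EXTRACT_PRIVATE  = YES
    EXTRACT_STATIC   = YES
    SOURCE_BROWSER   = YES
    HAVE_DOT         = YES
    CALL_GRAPH       = YES
    CALLER_GRAPH     = YES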
I was quite happy with a tool called Understand (15-day eval license available) for this kind of task. However, I agree with Guss that the time you'll need depends a lot on the size of the code, and one day is probably just enough for a small program.
cscope & ctags are a must when I work on my own code, and even more so when looking at other people's code.
You may also try this: http://www.sgvsarc.com/product_crystalflow.htm

Are some logical paths inherently untestable?

I have been using TDD to drive the project that I am currently working on and the results have been fairly satisfying. I did run into a problem (described here; still without a solution or any suggestions!) where there are some aspects of a particular method which may not be able to be tested (as in my example; briefly, I want to be able to handle a ManagementException which has a specific ErrorCode - but it doesn't seem possible for me to set up a test which throws a ManagementException like that).
So, how does one deal with that? Do we simply accept the fact that some logical paths are untestable (because of the framework that we are working in or limitations in the testing framework(s) that are currently available)?
Some designs do not lend themselves to testability, especially ones that do not have testability as one of the design goals. Generally, TDDed designs do not fall into this category.
To answer your original question, I've posted a response which involves using reflection to slot in the requested error code. However this may not work in all situations and is not a general solution.
The tradeoff here is the effort of writing the test vs. the benefit of having that particular piece of code under automated tests. If you feel that the cost-to-benefit ratio is huge and the probability of failure is minuscule, you may write it up as an exceptional manual test, leave a comment for future developers, and verify it manually for now. I'd say be pragmatic: if you've spent 30-40 minutes of a couple of developers' brain time trying to get it under test, maybe you need to step back and rethink your strategy. Have a look at Michael Feathers' 'Working Effectively with Legacy Code' for some suggestions on overcoming barriers to testability.
I don't think you could say that anything is logically untestable, but you will certainly find areas of code where the effort required to test them would be better spent elsewhere.
This is a great question, and one which I also found myself contemplating recently.
So first, I wouldn't say some logical paths are "untestable" - at most they are probably very hard to test with automatic unit testing. You could probably still test most of these problematic paths with some serious heavy duty system tests.
Consider this - anything you test can be thought to run inside a virtual machine under your control and you can (theoretically) simulate every aspect of its operation in order to test your software. Whether or not this is practical for most applications is another question.
I've just tried answering your original question (and collided in midflight with somebody else saying the same thing more concisely, or most of it at least;-). Anyway, there surely exist frameworks that are way too rigid (thanks to private and friends), and if you can't use introspection to go around that (despite having done all proper incantations), then you're just using a language that's too rigid as well as a framework that is.
I'd be astonished if that was the case in an overall system that supports dynamic languages (as .NET now does) such as IronRuby and IronPython -- maybe if C# won't let you go around accessibility limitations via introspection, the dynamic languages could serve.
That said, it is surely possible for the overall environment to be designed so badly and so rigidly to make it impossible to unit-test certain things -- even though I'm not entirely convinced that this is the case in your current situation.
Some things cannot be tested in an automated unit test because the language/framework/situation is just not open to it. The way to handle that is to reduce that area as much as possible and keep it so simple that it is highly unlikely to be a source of bugs or behavior changes later on.
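One common way to shrink that untestable area is to put the framework call behind a thin seam you control, so a test double can throw exactly the error you want to exercise. The original question is about .NET, but the idea can be sketched in C++ (all names below are hypothetical):

    #include <cassert>
    #include <stdexcept>
    #include <string>

    constexpr int kNotFoundCode = 42;   // hypothetical error code we want to handle

    struct ManagementError : std::runtime_error {
        int error_code;
        ManagementError(int code, const std::string& msg)
            : std::runtime_error(msg), error_code(code) {}
    };

    // The seam: production code talks to the framework only through this interface.
    class IManagementService {
    public:
        virtual ~IManagementService() = default;
        virtual void query() = 0;       // the real implementation wraps the framework
    };

    // Code under test: the error-handling path that was previously unreachable.
    bool run_query_safely(IManagementService& svc) {
        try {
            svc.query();
            return true;
        } catch (const ManagementError& e) {
            return e.error_code != kNotFoundCode;
        }
    }

    // Test double that forces the otherwise hard-to-trigger error.
    struct FakeService : IManagementService {
        void query() override { throw ManagementError(kNotFoundCode, "not found"); }
    };

    int main() {
        FakeService fake;
        assert(run_query_safely(fake) == false);   // the error path is now exercised
        return 0;
    }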
There is also more to testing than just unit testing; areas such as acceptance testing, QA, etc. cover things that unit tests cannot.

How to start modifying big projects

I have to do enhancements to an existing C++ project with over 100k lines of code.
My question is: how and where do I start with such projects?
The problem increases further if the code is not well documented.
Are there any automated tools for studying code flow with large projects?
Use Source Control before you touch anything!
There's a book for you: Working Effectively with Legacy Code
It's not about tools, but about various approaches, processes and techniques you can use to better understand and make changes to the code. It is even written from a mostly C++ perspective.
First study the existing interface well.
Write tests if they are absent, or expand already written ones (see the sketch below).
Modify the source code.
Run tests to check if the modification somehow breaks the older behaviour.
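For the second step, a characterization test can be as simple as pinning down what the current code returns for a few inputs before you touch anything; a minimal sketch (the legacy function below is a made-up stand-in):

    #include <cassert>

    // Made-up stand-in for an existing legacy function, quirks and all.
    int legacy_discount(int quantity) {
        if (quantity >= 100) return 20;
        if (quantity >= 10)  return 5;
        return 0;
    }

    // Characterization tests: they record what the code does *today*; rerun them
    // after every modification to catch accidental changes in behaviour.
    int main() {
        assert(legacy_discount(0)   == 0);
        assert(legacy_discount(10)  == 5);
        assert(legacy_discount(100) == 20);
        return 0;
    }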
There is another good book, currently freely available on the net, about object-oriented reengineering: http://www.iam.unibe.ch/~scg/OORP/
The book "Code Reading" by Diomidis Spinellis contains lots of advice about how to gain an overview and in-depth knowledge about larger, unknown projects.
Chapter 6 focuses solely on that topic (Tackling Large Projects). The chapters about tooling (Ch. 9) and architecture (Ch. 8) might also contain useful hints for you.
However, the book is about understanding the code by reading it; it does not directly tackle the maintenance step.
First thing I would do is try to find the product's requirements.
It's almost unthinkable that a product of this size would be developed without requirements.
By perusing the requirements, you'll be able to:
get a sense of what the product (and hence the code) is at least supposed to be doing
see just how well (or poorly) the code actually fulfills those requirements
Otherwise you're just looking at code, trying to divine the intention of the developers...
If you are able to run the code on a PC, you can try to build a call graph, usually from profiling output.
Cross-referencing tools like cscope, ctags, lxr, etc. can also help a lot.
Spending some time reading, building class diagrams, or even adding comments to the parts of the code that took you a long time to understand are all steps towards getting familiar with the codebase and getting ready to modify/extend it.
The first thing you need to do is understand how the code works. Read what documentation there is and then watch the program operate under a debugger. If you watch the main function/loop and then slowly work your way deeper into the program, you can gain a pretty good idea how things are operating. Make sure you write down your findings so others who follow after you have a better position to start from.
Run Doxygen with the EXTRACT_ALL tag set to document all the relationships in the code base. It's not going to help you with the code flow, but hopefully it will shed some light with regard to the structure and design of the entire application.
A very good Austrian programmer once told me that in order to understand a program you first have to understand the data structures that the program uses.

Are C++ static code analysis tools worth it?

Our management has recently been talking to some people selling C++ static analysis tools. Of course the sales people say they will find tons of bugs, but I'm skeptical.
How do such tools work in the real world? Do they find real bugs? Do they help more junior programmers learn?
Are they worth the trouble?
Static code analysis is almost always worth it. The issue with an existing code base is that it will probably report far too many errors to make it useful out of the box.
I once worked on a project that had 100,000+ warnings from the compiler... no point in running Lint tools on that code base.
Using Lint tools "right" means buying into a better process (which is a good thing). One of the best jobs I had was working at a research lab where we were not allowed to check in code with warnings.
So, yes the tools are worth it... in the long term. In the short term turn your compiler warnings up to the max and see what it reports. If the code is "clean" then the time to look at lint tools is now. If the code has many warnings... prioritize and fix them. Once the code has none (or at least very few) warnings then look at Lint tools.
So, Lint tools are not going to help a poor code base, but once you have a good codebase it can help you keep it good.
Edit:
In the case of the 100,000+ warning product, it was broken down into about 60 Visual Studio projects. As each project had all of its warnings removed, it was changed so that warnings were treated as errors; that prevented new warnings from being added to projects that had already been cleaned up (or rather, it let my co-worker righteously yell at any developer who checked in code without compiling it first :-)).
In my experience with a couple of employers, Coverity Prevent for C/C++ was decidedly worth it, finding some bugs even in good developers’ code, and a lot of bugs in the worst developers’ code. Others have already covered technical aspects, so I’ll focus on the political difficulties.
First, the developers whose code needs static analysis the most are the least likely to use it voluntarily. So I’m afraid you’ll need strong management backing, in practice as well as in theory; otherwise it might end up as just a checklist item, to produce impressive metrics without actually getting bugs fixed. Any static analysis tool is going to produce false positives; you’re probably going to need to dedicate somebody to minimizing the annoyance from them, e.g., by triaging defects, prioritizing the checkers, and tweaking the settings. (A commercial tool should be extremely good at never showing a false positive more than once; that alone may be worth the price.) Even the genuine defects are likely to generate annoyance; my advice on this is not to worry about, e.g., check-in comments grumbling that obviously destructive bugs are “minor.”
My biggest piece of advice is a corollary to my first law, above: Take the cheap shots first, and look at the painfully obvious bugs from your worst developers. Some of these might even have been found by compiler warnings, but a lot of bugs can slip through those cracks, e.g., when they’re suppressed by command-line options. Really blatant bugs can be politically useful, e.g., with a Top Ten List of the funniest defects, which can concentrate minds wonderfully, if used carefully.
As a couple of people have remarked, if you run a static analysis tool full bore on most applications, you will get a lot of warnings; some of them may be false positives or may not lead to an exploitable defect. It is that experience that leads to the perception that these types of tools are noisy and perhaps a waste of time. However, there are warnings that will highlight real and potentially dangerous defects that can lead to security, reliability, or correctness issues, and for many teams those issues are important to fix and may be nearly impossible to discover via testing.
That said, static analysis tools can be profoundly helpful, but applying them to an existing codebase requires a little strategy. Here are a couple of tips that might help you:
1) Don't turn everything on at once; decide on an initial set of defects, turn those analyses on, and fix them across your code base.
2) When you are addressing a class of defects, help your entire development team to understand what the defect is, why it's important and how to code to defend against that defect.
3) Work to clear the codebase completely of that class of defects.
4) Once this class of issues has been fixed, introduce a mechanism to stay in that zero-issue state. Luckily, it is much easier to make sure you are not re-introducing an error if you are at a baseline that has no errors.
It does help. I'd suggest taking a trial version and running it through a part of your codebase which you think is neglected. These tools generate a lot of false positives. Once you've waded through those, you're likely to find a buffer overrun or two that can save a lot of grief in the near future. Also, try at least two or three varieties (and also some of the open-source offerings).
I've used them - PC-Lint, for example, and they did find some things. Typically they are configurable and you can tell them 'stop bothering me about xyz', if you determine that xyz really isn't an issue.
I don't know that they help junior programmers learn a lot, but they can be used as a mechanism to help tighten up the code.
I've found that a second set of (skeptical, probing for bugs) eyes and unit testing is typically where I've seen more bug catching take place.
Those tools do help. lint has been a great tool for C developers.
But one objection that I have is that they're batch processes that run after you've written a fair amount of code and potentially generate a lot of messages.
I think a better approach is to build such a thing into your IDE and have it point out the problem while you're writing it so you can correct it right away. Don't let those problems get into the code base in the first place.
That's the difference between the FindBugs static analysis tool for Java and IntelliJ's Inspector. I greatly prefer the latter.
You are probably going to have to deal with a good amount of false positives, particularly if your code base is large.
Most static analysis tools work using "intra-procedural analysis", which means that they consider each procedure in isolation, as opposed to "whole-program analysis" which considers the entire program.
They typically use "intra-procedural" analysis because "whole-program analysis" has to consider many paths through a program that won't actually ever happen in practice, and thus can often generate false positive results.
Intra-procedural analysis eliminates those problems by just focusing on a single procedure. In order to work, however, they usually need to introduce an "annotation language" that you use to describe meta-data for procedure arguments, return types, and object fields. For C++ those things are usually implemented via macros that you decorate things with. The annotations then describe things like "this field is never null", "this string buffer is guarded by this integer value", "this field can only be accessed by the thread labeled 'background'", etc.
The analysis tool will then take the annotations you supply and verify that the code you wrote actually conforms to the annotations. For example, if you could potentially pass a null off to something that is marked as not null, it will flag an error.
In the absence of annotations, the tool needs to assume the worst, and so will report a lot of errors that aren't really errors.
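As a small illustration of what such annotations look like in practice, here is a sketch using the GCC/Clang nonnull attribute (MSVC's SAL macros such as _In_ play a similar role); the function itself is made up:

    #include <cstring>

    // Annotation: tells the analyser that 'name' is never null, so it no longer
    // has to assume the worst inside the function, and it can flag call sites
    // that may pass a null pointer.
    void greet_user(const char* name) __attribute__((nonnull));

    void greet_user(const char* name) {
        std::size_t len = std::strlen(name);   // safe to assume name != nullptr here
        (void)len;
    }

    int main() {
        greet_user("Alice");       // fine
        // greet_user(nullptr);    // an analyser (or -Wnonnull) flags this call
        return 0;
    }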
Since it appears you are not using such a tool already, you should assume you are going to have to spend a considerable amount of time annotating your code to get rid of all the false positives that will initially be reported. I would run the tool initially and count the number of errors. That should give you an estimate of how much time you will need to adopt it in your code base.
Whether or not the tool is worth it depends on your organization. What kinds of bugs are you bitten by the most? Are they buffer overrun bugs? Are they null-dereference or memory-leak bugs? Are they threading issues? Are they "oops, we didn't consider that scenario", or "we didn't test a Chinese version of our product running on a Lithuanian version of Windows 98"?
Once you figure out what the issues are, then you should know if it's worth the effort.
The tool will probably help with buffer overflow, null dereference, and memory leak bugs. There's a chance that it may help with threading bugs if it has support for "thread coloring", "effects", or "permissions" analysis. However, those types of analysis are pretty cutting-edge, and have HUGE notational burdens, so they do come with some expense. The tool probably won't help with any other type of bugs.
So, it really depends on what kind of software you write, and what kind of bugs you run into most frequently.
I think static code analysis is well worth it, if you are using the right tool. Recently, we tried the Coverity tool (a bit expensive). It's awesome: it brought out many critical defects which were not detected by lint or Purify.
We also found that we could have avoided 35% of the customer field defects if we had used Coverity earlier.
Now Coverity is rolled out in my company, and whenever we get a customer TR against an old software version, we run Coverity against it to bring out the possible candidates for the fault before we start the analysis of a subsystem.
Paying for most static analysis tools is probably unnecessary when there are some very good-quality free ones (unless you need some very special or specific feature provided by a commercial version). For example, see this answer I gave on another question about cppcheck.
I guess it depends quite a bit on your programming style. If you are mostly writing C code (with the occasional C++ feature) then these tools will likely be able to help (e.g. memory management, buffer overruns, ...). But if you are using more sophisticated C++ features, then the tools might get confused when trying to parse your source code (or just won't find many issues because C++ facilities are usually safer to use).
As with everything, the answer depends... If you are the sole developer working on a knitting-pattern pretty-printer for your grandma, you probably do not want to buy any static analysis tools. If you have a medium-sized project for software that will go into something important, and maybe on top of that you have a tight schedule, you might want to invest a little bit now that saves you much more later on.
I recently wrote a general rant on this: http://www.redlizards.com/blog/?p=29
I should write part 2 as soon as time permits, but in general do some rough calculations whether it is worth it for you:
how much time spent on debugging?
how many resources are tied up?
what percentage could have been found by static analysis?
costs for tool setup?
purchase price?
peace of mind? :-)
My personal take is also:
get static analysis in early
early in the project
early in the development cycle
early as in really early (before nightly build and subsequent testing)
provide the developer with the ability to use static analysis himself
nobody likes to be told by test engineers or some anonymous tool
what they did wrong yesterday
less debugging makes a developer happy :-)
provides a good way of learning about (subtle) pitfalls without embarrassment
This rather amazing result was accomplished using Elsa and Oink.
http://www.cs.berkeley.edu/~daw/papers/fmtstr-plas07.pdf
"Large-Scale Analysis of Format String Vulnerabilities in Debian Linux"
by Karl Chen, David Wagner,
UC Berkeley,
{quarl, daw}@cs.berkeley.edu
Abstract:
Format-string bugs are a relatively common security vulnerability, and can lead to arbitrary code execution. In collaboration with others, we designed and implemented a system to eliminate format string vulnerabilities from an entire Linux distribution, using type-qualifier inference, a static analysis technique that can find taint violations. We successfully analyze 66% of C/C++ source packages in the Debian 3.1 Linux distribution. Our system finds 1,533 format string taint warnings. We estimate that 85% of these are true positives, i.e., real bugs; ignoring duplicates from libraries, about 75% are real bugs. We suggest that the technology exists to render format string vulnerabilities extinct in the near future.
Categories and Subject Descriptors D.4.6 [Operating Systems]: Security and Protection—Invasive Software;
General Terms: Security, Languages;
Keywords: Format string vulnerability, Large-scale analysis, Type-qualifier inference
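The bug class in question fits in a couple of lines; a deliberately unsafe sketch:

    #include <cstdio>

    // Format-string bug: attacker-controlled data is used as the format string,
    // so directives like %s or %n in the input are interpreted.
    void log_message(const char* user_input) {
        std::printf(user_input);         // unsafe: the taint violation the paper finds
        std::printf("%s", user_input);   // safe: the input is treated purely as data
    }

    int main() {
        log_message("hello");            // harmless here, but input containing "%n"
        return 0;                        // would make the first call exploitable
    }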
Static analysis that finds real bugs is worth it regardless of whether it's C++ or not. Some tend to be quite noisy, but if they can catch subtle bugs like signed/unsigned comparisons causing optimizations that break your code or out of bounds array accesses, they are definitely worth the effort.
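A typical example of the signed/unsigned kind, which compilers and analysers will flag as a comparison that is always true (a minimal made-up sketch):

    #include <cstddef>
    #include <vector>

    // v.size() is unsigned, so 'i >= 0' is always true: with an all-zero (or empty)
    // vector, --i wraps around to a huge value and the loop reads out of bounds.
    int last_nonzero_index(const std::vector<int>& v) {
        for (std::size_t i = v.size() - 1; i >= 0; --i)   // always-true comparison
            if (v[i] != 0)
                return static_cast<int>(i);
        return -1;                                        // never reached
    }

    int main() {
        std::vector<int> v{1, 2, 3};
        return last_nonzero_index(v) == 2 ? 0 : 1;        // happens to work here
    }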
At a former employer we had Insure++.
It helped to pinpoint random behaviour (use of uninitialized stuff) which Valgrind could not find. But most importantly: it helped to remove mistakes which were not yet known to be errors.
Insure++ is good, but pricey, that's why we bought one user license only.

Write C++ in a graphical Scratch-like way?

I am considering the possibility of designing an application that would allow people to develop C++ code graphically. I was amazed when I discovered Scratch (see site and tutorial videos).
I believe most of C++ can be represented graphically, with the exceptions of preprocessor instructions and possibly function pointers.
What C++ features do you think could be (or not be) represented by graphical items?
What would be the pros and cons of such an application ? How much simpler would it be than "plain" C++?
RECAP and MORE:
Pros:
intuitive
simple for small applications
helps avoid typos
Cons:
may become unreadable for large- (medium-?) sized applications
manual coding is faster for experienced programmers
C++ is too complicated a language for such an approach
Considering that we -at my work- already have quite a bit of existing C++ code, I am not looking for a completely new way of programming. I am considering an alternate way of programming that is fully compatible with legacy code. Some kind of "viral language" that people would use for new code and, hopefully, would eventually use to replace existing code as well (where it could be useful).
How do you feel towards this viral approach?
When it comes to manual vs graphical programming, I tend to agree with your answers. This is why, ideally, I'll find a way to let the user always choose between typing and graphical programming. A line-by-line parser (+partial interpreter) might be able to convert typed code into graphical design. It is possible. Let's all cross our fingers.
Are there caveats to providing both typing and graphical programming capabilities that I should think about and analyze carefully?
I have already worked on template classes (and more generally type-level C++) and their graphical representation.
See there for an example of graphical representation of template classes. Boxes represent classes or class templates. First top node is the class itself, the next ones (if any) are typedef instructions inside the class. Bottom nodes are template arguments. Edges, of course, connect classes to template arguments for instantiations.
I already have a prototype for working on such type-level diagrams.
If you feel this way of representing template classes is plain wrong, don't hesitate to say so and why!
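For reference, here is a made-up example of the kind of code such a diagram would describe; in the scheme above, the box's top node would be Stack itself, the next node the container typedef, and the bottom node the template argument T, with an edge to int for the instantiation:

    #include <vector>

    template <typename T>
    class Stack {
    public:
        typedef std::vector<T> container;     // the typedef node inside the class box
        void push(const T& value) { data_.push_back(value); }
    private:
        container data_;
    };

    int main() {
        Stack<int> s;    // instantiation: edge from the template-argument node to int
        s.push(42);
        return 0;
    }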
Much as I like Scratch, it is still much quicker for an experienced programmer to write code using a text editor than it is to drag blocks around. This has been proved time and again with any number of graphical programming environments.
Writing code is the easiest part of a developer's day. I don't think we need more help with that. Reading, understanding, maintaining, comparing, annotating, documenting, and validating is where - despite a gargantuan amount of tools and frameworks - we are still lacking.
To dissect your pros:
Intuitive and simple for small applications - replace that with "misleading". It makes it look simple, but it isn't: As long as it is simple, VB.NET is simpler. When it gets complicated, visual design would get in the way.
Help avoid typos - that's what good style, consistency, and last but not least IntelliSense are for. The things you need anyway when things aren't simple anymore.
Wrong level
You are thinking on the wrong level: C++ statements are not reusable, robust components; they are more like a big bag of gears that need to be put together correctly. C++, with its complexity and exceptions (to rules), isn't even particularly suited to this.
If you want to make things easy, you need reusable components at a much higher level. Even if you have these, plugging them together is not simple. Despite years of struggle, and many attempts in many environments, this sometimes works and often fails.
Viral - You are correct IMO about that requirement: allow incremental adoption. This is closely related to switching smoothly between source code and visual representation, which in turn probably means you must be able to generate the visual representation from modified source code.
IDE Support - here's where most language-centered approaches go astray. A modern IDE is more than just a text editor and a compiler. What about debugging your graph - with breakpoints, data inspection etc? Will profilers, leak detectors etc. highlight nodes in your graph? Will source control give me a Visual Diff of yesterday's graph vs. today's?
Maybe you are on to something, despite all my "no"s: a better way to visualize code, a way to put different filters on it so that I see just what I need to see.
The early versions of C++ were originally written so that they compiled to C, then the C was compiled as normal.
What it sounds like you are describing is a graphical language that is compiled to C++, which will then be compiled as normal.
So really you are not creating a graphical C++, you are creating a new language that happens to be graphical. Nothing wrong with that, but don't let C++ restrict what you do, because eventually you may want to compile the graphical language straight to machine code, or even to something like CIL, Java ByteCode, or whatever else tickles your fancy.
Other graphical languages you may want to check out are LabVIEW, and more generally the category of visual programming languages.
Good luck in your efforts.
The complexity of a nontrivial program is usually too high to be represented with graphical symbols, which are low in their information content. Unless your approach is markedly different in some way, I am skeptical that this would be of value based on past efforts.
So, practically speaking, this will be useful only for instructional purposes and very simple programs. But that would still be a great target market for a product like this. Sometimes people have trouble grasping the fundamentals, and a visual model might be just the thing to help things click.
Interesting idea. I doubt I'd use it, though. I tend to prefer coding in a flat text editor, not even an IDE, and for tough problems I prefer a pad of paper. Most of the really experienced programmers I know work this way. Maybe it's because we grew up in a different environment, but I think it's also because of the way we think about programming. As you get more experience, you start seeing the code in your head more clearly than any GUI tool will show it to you.
As for your question, I'd nominate templates as one of the harder / more interesting sort of thing to try to represent well. They are ubiquitous and carry information that you won't have access to as you are designing your tool. Getting that to the user in a useful way should pose an interesting challenge.
What C++ features do you think could be [...] represented by graphical items?
Object Oriented Design. Hence classes, inheritance, polymorphism, mutability, const-ness etc. And, templates.
What would be the pros and cons of such an application?
It may be easier for beginners to start writing programs. For the experienced, it may get rid of the boring parts of programming.
Think of any other code generator. They create a framework for you to write the more involved portion(s). They also lead to bloated-code (think of any WYSIWYG HTML editor).
The biggest challenge, as I see it, is that any such UI necessarily hinders the user's imagination.
How much simpler would it be than "plain" C++?
It can be a real pain when you wade through truckloads of errors, which is typical of code generators.
Further, since a lot of code is generated, you have no idea of what is going on -- debugging becomes difficult.
Also, for the experienced there may be some irritation to find that the generated code is not per their preferred coding style.
I prefer hot-keys instead of graphical menus and buttons.
And I think the same thing will happen with a graphical development tool: many people will prefer manual coding.
But a source code visualizer - that would be a nice thing.
I like the idea, but I suspect there comes a point where things get far too complicated to be represented graphically.
However, given recent experience at work, it would be useful to give such a graphical interface to a non-techie person to use to create basic drag-and-drop programs, leaving myself free to get on with some "proper" programming ;-) If it can do the job of allowing somebody non-skilled to build something functional, it can be a very good thing (even if programming logic escapes them).
There comes a point in such a system where it becomes easier to define what you want to do using literal C++ code, rather than have a user interface getting in the way; it can get frustrating for the seasoned programmer who knows the precise code that needs to be written but is then limited to the design GUI. I'm specifically thinking about a more common application, such as HTML editors/designers, which allow newbies to build their websites without knowing any HTML at all.
It would be interesting to see how such a system would handle the dynamic allocation of memory, and the different states of a program as time progresses; I suspect that there are some very basic programming concepts that may be difficult to represent graphically... Polymorphism? Virtual classes, linked lists, stacks/circular queues? I wonder how you would explain an image compression algorithm (such as JPEG) successfully, too, without the help of a gigantic display screen.
I also wonder if such a system would even go to such a low level, and whether you would be dealing with abstracted concepts and the compiler would be working out the best way to do something.
I've been working on a new model-driven software development paradigm named ABSE (http://www.abse.info) that supports end-user programming: It's a template-based system that can be complemented with transformation code. I also have an IDE (named AtomWeaver) implementing ABSE that is in pre-alpha stage right now.
With AtomWeaver, as an expert/architect, you build your knowledge Templates, and then the developers (or end-users if you make your meta-models simpler) can just "assemble" systems by building blocks, and then filling template parameters in form-style editors.
At the end, pressing the "Generate" button will create the final system as specified by the architect/expert.
I'm surprised you think function pointers would be a particular problem. How about anything at all to do with pointers?
A programming language can be represented by a hierarchy of nodes - that's exactly what the compiler turns it into. It is very strange that the UI for editing programs is still a sequence of characters that gets parsed, because the degrees of freedom in the editor are way larger than the available set of allowed choices. But IntelliSense helps to reduce this problem a lot.
C++ would be a strange choice to base such a system on.
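To make the "hierarchy of nodes" point above concrete, even a single expression is already a small tree once parsed; a minimal sketch of such a node hierarchy:

    #include <iostream>
    #include <memory>

    // Tiny expression tree: the sort of node hierarchy a compiler (or a graphical
    // editor) works with. "1 + 2 * 3" becomes Add(Num(1), Mul(Num(2), Num(3))).
    struct Expr {
        virtual ~Expr() = default;
        virtual int eval() const = 0;
    };

    struct Num : Expr {
        int value;
        explicit Num(int v) : value(v) {}
        int eval() const override { return value; }
    };

    struct Add : Expr {
        std::unique_ptr<Expr> lhs, rhs;
        Add(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
            : lhs(std::move(l)), rhs(std::move(r)) {}
        int eval() const override { return lhs->eval() + rhs->eval(); }
    };

    struct Mul : Expr {
        std::unique_ptr<Expr> lhs, rhs;
        Mul(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
            : lhs(std::move(l)), rhs(std::move(r)) {}
        int eval() const override { return lhs->eval() * rhs->eval(); }
    };

    int main() {
        auto tree = std::make_unique<Add>(
            std::make_unique<Num>(1),
            std::make_unique<Mul>(std::make_unique<Num>(2), std::make_unique<Num>(3)));
        std::cout << tree->eval() << "\n";   // prints 7
        return 0;
    }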
I think the major problem with this kind of IDE is that the generated code easily becomes unmaintainable.
This happened to Delphi. It's a really nice tool for developing certain kinds of applications; however, when we start adding complex relationships between the components, start adding design patterns, etc., the code grows to an unmaintainable size.
I believe it's also because graphical tools don't apply the concept of MVC (or if they do, it's only in the way that the IDE understands).
It can be really helpful for prototypes and very small applications that don't tend to grow; otherwise, it can become a mess for the developer(s).