Removing dependencies from statechart framework - c++

I've got a lot of problems with the project I am currently working on. The project is more than 10 years old, and it was based on one of those commercial C++ frameworks that were very popular in the 90's. The problem is with statecharts. The framework provides a fairly common implementation of the state pattern: each state is a separate class with an action on entry, an action in state, and so on. There is a switch which sets the current state according to received events.
The devil is hidden in the details. The project is enormous, roughly 2000 KLOC. There are definitely too many statecharts (I've seen "for" loops implemented using statecharts). What's more, the framework allows embedding a statechart in another statechart, so there are many statecharts with seven or more levels of nesting. Because statecharts run in different threads and it's possible to send events between statecharts, we have lots of synchronization problems (and a big mess in the interfaces).
I must admit that the scale of this problem is overwhelming and I don't know how to approach it. My first idea was to remove as much code as I can from the statecharts and put it into separate classes, then have the statecharts delegate to those classes to do the work. But the result would be many separate functions that don't logically belong to any specific functionality, and any change to the statechart architecture would also require changing those classes and functions.
So I am asking for help:
Do you know of any books/articles/magic artefacts which can help me fix this? I would like to at least separate as much code as I can from the statecharts without introducing any hidden dependencies, and keep the separated code maintainable, testable and reusable.
If you have any suggestion how to handle this, please let me know.

The statechart pattern is intended to be used specifically to remove switch statements, so this sounds like a horrid abuse. Additionally, states should only change on asynchronous events. If you are processing an event and you change through multiple states (or for loop, etc.), then this is also a horrid abuse of the pattern.
I would start from these two points, as just fixing them up will solve many of your concurrency issues. What you need to determine is:
What are your external, asynchronous events to the system? These are the only things that should be determining state transitions, not things that happen during event processing. An event may cause 0 or 1 state transitions. Once you have a list of these state transitions, you can reconstruct the actual states of your system. If you are aware of UML State diagrams, this would be a perfect time to sketch one up in a charting program, not just for yourself (though it will help you immensely), but also for everyone in the future that has to return to the project. As you have learned, this happens.
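As a rough sketch of the shape this gives the code (the Door/Open/Closed names below are made up for illustration, not taken from your framework, and it is written against modern C++ for brevity), each external event becomes a method on the machine, and handling it causes at most one transition - with no switch on the current state anywhere:

#include <memory>

// Each event handler returns the next state, or nullptr for "no transition".
class State {
public:
    virtual ~State() = default;
    virtual std::unique_ptr<State> onOpenRequested()  { return nullptr; }
    virtual std::unique_ptr<State> onCloseRequested() { return nullptr; }
};

class Open : public State {
public:
    std::unique_ptr<State> onCloseRequested() override;   // defined below, after Closed
};

class Closed : public State {
public:
    std::unique_ptr<State> onOpenRequested() override { return std::make_unique<Open>(); }
};

std::unique_ptr<State> Open::onCloseRequested() { return std::make_unique<Closed>(); }

// One method per external, asynchronous event; each call causes 0 or 1 transitions.
class Door {
public:
    Door() : current_(std::make_unique<Closed>()) {}
    void openRequested()  { if (auto next = current_->onOpenRequested())  current_ = std::move(next); }
    void closeRequested() { if (auto next = current_->onCloseRequested()) current_ = std::move(next); }
private:
    std::unique_ptr<State> current_;
};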
Now that you know what the real states are, list the states in the code that shouldn't be states at all. That usually indicates something that can be "functionally decomposed". Instead of a state object for each of these, likely all that is needed is a separate function. This will cut down on a lot of the overhead of state objects and should clean up the code immensely.
Now it's time to tackle those horrendous switch statements you mentioned. If they were truly based on state, you shouldn't need one at all. Instead, you should be able to call the state machine directly.
Something like:
myStateMachine->myEvent();
and it should work without any switch. But notice, this may be the case even for some of those objects that don't work across asynchronous events. This is also an indication of where you may just use inheritance to get the same effect. If you have:
switch (someTypeIdentifier)
{
    case type1:
        doSomething();
        break;
    case type2:
        doSomethingElse();
        break;
}
usually the correct OOP approach is to create two actual types, Type1 and Type2, both derived from an abstract base TypeBase, with a virtual method doSomething() that does what you need. The reason this is useful is that it means you can "close" the handling (in the sense of the Open/Closed Principle) and still extend the functionality by adding new derived types as needed (leaving it open to extension). This saves bugs like crazy because it gets developers' hands out of those switch statements, which can get quite ugly and convoluted, and instead encapsulates each separate behavior in its own class.
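A small sketch of what that replacement looks like (TypeBase, Type1 and Type2 are the placeholder names from the paragraph above, not classes from any real project):

class TypeBase {
public:
    virtual ~TypeBase() = default;
    virtual void doSomething() = 0;          // each behavior lives with its own type
};

class Type1 : public TypeBase {
public:
    void doSomething() override { /* what the old "case type1:" branch did */ }
};

class Type2 : public TypeBase {
public:
    void doSomething() override { /* what the old "case type2:" branch did */ }
};

// The switch collapses into a single virtual call; adding a Type3 later means
// adding a class, not editing every switch in the code base.
void process(TypeBase& object) {
    object.doSomething();
}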
Now look to fix up your thread issues. Identify all objects used from multiple threads. Make a list. Now, how are these used? Are some of them always used together? Start making groups. The goal here is to find the level of encapsulation that best works for these objects: separate the objects into individual classes that control their own synchronisation, figure out the atomic level of actual "transactions" for the objects, and make methods of the classes that expose those meaningful transactions, wrapped behind the scenes with the appropriate mutexes, condition variables, etc.
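For instance, a minimal sketch of one such class, assuming standard C++11 threading is available (your framework may provide its own locking primitives instead; Event and EventQueue are illustrative names only):

#include <deque>
#include <mutex>
#include <string>

struct Event { std::string name; };

class EventQueue {
public:
    // Each public method is one atomic "transaction"; the mutex never leaks out.
    void post(Event e) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(std::move(e));
    }
    bool tryPop(Event& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (pending_.empty()) return false;
        out = std::move(pending_.front());
        pending_.pop_front();
        return true;
    }
private:
    std::mutex mutex_;
    std::deque<Event> pending_;
};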
You might be saying "that sounds like a lot of work! Why do all that instead of just rewriting it all from scratch myself?" Good question! :) The reason is actually straightforward: if you are going to do it all by yourself, those are the steps you should be doing anyway. You should be identifying your states, your dynamic polymorphism, and getting a handle on the multithreaded transactions. But if you start with the existing code, you also capture all of those unspoken business rules that were never documented and that, if missed, may cause all sorts of unexpected bugs down the line. You don't have to bring everything over - if you suspect something is a bug, discuss the logic with the people who have worked with the system in the past (if available), QA, or whoever might identify bugs, and see if it really should be carried over. But you need to actually evaluate what the bugs are either way, or you may not code something that actually needed coding.
In the end, this is a manual process that is a part of software engineering. There are CASE tools that can help draw up the state diagrams and even publish them to code, there are refactoring tools, like those found in many IDEs, that can help move code between functions and classes, and similar tools which can help identify threading needs. However, those things shouldn't be picked up for a single project. They need to be learned throughout your career, picking them up and learning them more deeply over years of work, as they are a part of being a software engineer. They don't do it for you. You still need to know the whys and hows, and they just help get it done more efficiently.

Statecharts (including nested statecharts) are a powerful way to specify, understand and even simulate/validate complex control flow. But to gain the benefit, you need the statechart model in a suitable tool (I used Statemate way back in the day, not sure if it's still available), plus a reliable mapping from the chart to the code (Statemate used to generate the code) - then you can (mostly) forget about the state management code! In your situation, if you don't have the model, I would try to reverse-engineer one from the code - as Ira says, chances are high that the original developers had a model in some form, and you may find the code making a lot of sense as the model emerges. If this works out, you will have a really good spec/model of the code which should make future code edits much easier, even if you don't want to go to automatic code generation and instead maintain the code/model mapping manually (but you'll need to be meticulous!).

Sounds to me like your best bet is (gulp!) likely to start from scratch if it's as horrifically broken as you make out. Is there any documentation? Could you begin to build some saner software based on the docs?
If a complete re-write isn't an option (and they never are in my experience) I'd try some of the following:
If you don't already have it, draw an architectural picture of the whole system. Sketch out how all the bits are supposed to work together and that will help you break the system down into potentially manageable / testable parts.
Do you have any kind of requirements or testing plan in place? If not, can you write one and start to put unit tests in place for the various chunks of code / functionality which exist already? If you can do that, you can start to refactor things without breaking as much of whatever does currently work.
Once you've broken things down a bit, start building your unit tests into integration tests which pull together more of the functionality.
I've not read them myself, but I've heard good things about these books which may have some advice you can use:
Refactoring: Improving the Design of Existing Code (Object Technology Series).
Working Effectively with Legacy Code (Michael Feathers)
Good luck! :-)

Related

How to select the right architectural/design patterns

I am doing my own research project, and I am struggling quite a bit with the right choice of architectural/design patterns.
In this project, after the "system" starts, I need to do something in the background (tasks, processing, displaying data and so on) and at the same time be able to interact with the system using, for example, the keyboard, and send commands like "give me the status of this particular object" or "what is the data in this object".
So my question is - what software architectural/design patterns can be applied to this particular project? How should the interaction between classes/objects be organized? How should the objects be created?
Can, for example, "event-driven architecture" or "Microkernel" be applied here? Some references to useful resources will be very much appreciated!
Thank you very much in advance!
Careful with design patterns. If you sprinkle them throughout your code hoping that everything will work great, you'll soon have an unreadable mess full of boilerplate. They are recipes, not solutions.
My advice to you is to take a piece of paper and a pencil and start drawing all the entities of your domain, with all their requisites, and see how they relate. If you want to get somewhat serious about it, you can do something like this.
When defining your entities, strive for high cohesion and loose coupling.
High cohesion means that you should keep similar functionalities together. In a very simple example, if you have a class that reads stuff from a file and processes it, the class has low cohesion, since reading and processing are two very distinct functionalities. In this case, you would want a class for each functionality.
As for loose coupling, it means that your entities should be independent of each other. Using the example above, suppose that you are now the proud owner of two highly cohesive classes - one that reads stuff from a file (Reader), and one that processes that stuff (Processor). Now, suppose that the Processor class has an instance of the Reader class and calls it in order to get its input. In this case, we can say that both classes are tightly coupled, since Processor won't work without Reader. In the OOP world, the solution for this is typically the use of interfaces. You can find a neat example here.
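To make that concrete, here is a rough sketch of the Reader/Processor example with an interface in between (the names follow the example above and are not from any particular library):

#include <string>
#include <vector>

class InputSource {                       // the interface that breaks the coupling
public:
    virtual ~InputSource() = default;
    virtual std::vector<std::string> readAll() = 0;
};

class FileReader : public InputSource {
public:
    explicit FileReader(std::string path) : path_(std::move(path)) {}
    std::vector<std::string> readAll() override {
        // real file I/O omitted for brevity
        return {};
    }
private:
    std::string path_;
};

class Processor {
public:
    // Processor depends only on the interface, so it can be tested with a fake
    // InputSource and reused with sources other than files.
    explicit Processor(InputSource& source) : source_(source) {}
    void run() {
        for (const auto& line : source_.readAll()) {
            (void)line;  // processing omitted
        }
    }
private:
    InputSource& source_;
};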
After defining an initial model of your domain and gathering as much knowledge about it as you can, you can start to think about the implementation's architecture. This is where you can start thinking about the architectural patterns. Event-driven architecture, clean architecture, MVP, MVVM... It will all depend on your domain. It is your job to know which pattern will fit best. Spoiler alert: this can be extremely hard to do correctly even for experienced engineers, so don't be afraid to fail.
Finally, leave the design patterns for the implementation stage. Their use completely depends on your implementation problems and decisions. Also, DON'T FORCE THEM. Ideally, you will solve a problem and, IF APPLICABLE, you'll see a pattern emerging. Trust me, the last thing you want is to have a case of design patternitis. Anyway, if you need literature on patterns, I totally recommend this book. It's great no matter your level as an engineer.
Further reading:
SOLID principles
Onion Architecture
Clean architecture
Good luck!
You have a background task, and it can indeed serve as a message pump/event queue. Your foreground task would then send requests to this background thread and asynchronously wait for the result.
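A minimal sketch of that background message pump, assuming plain C++11 threading rather than any particular framework (Worker, post and status are illustrative names, and the status query itself is a placeholder):

#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Worker {
public:
    Worker() : thread_([this] { run(); }) {}
    ~Worker() {
        post([this] { done_ = true; });                  // stop job, runs on the worker thread
        thread_.join();
    }
    // The foreground thread posts a request and gets a future to wait on.
    std::future<std::string> status() {
        auto task = std::make_shared<std::packaged_task<std::string()>>(
            [] { return std::string("OK"); });           // placeholder for the real status query
        post([task] { (*task)(); });
        return task->get_future();
    }
private:
    void post(std::function<void()> job) {
        { std::lock_guard<std::mutex> lock(mutex_); jobs_.push(std::move(job)); }
        wake_.notify_one();
    }
    void run() {                                         // the background message pump
        while (!done_) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wake_.wait(lock, [this] { return !jobs_.empty(); });
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::mutex mutex_;
    std::condition_variable wake_;
    std::queue<std::function<void()>> jobs_;
    bool done_ = false;
    std::thread thread_;                                 // declared last so the members above exist first
};

// usage: Worker w; auto reply = w.status(); /* do other work */ reply.get();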
Have a look at the book "Patterns for Parallel Programming".
It is much better if you check a book for Design Patterns. I really like this one.
For example, if you need to get some data from a particular object, the Observer pattern may work for you: as soon as the object has the data, you (or another object) get to know about it and can work with it, perhaps with another pattern (Strategy might work; it really depends on what you have to do).
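A very small Observer sketch, just to illustrate the idea (DataSource and the callback type are made-up names): the object holding the data notifies whoever subscribed as soon as the data arrives.

#include <functional>
#include <vector>

class DataSource {
public:
    using Callback = std::function<void(int)>;
    void subscribe(Callback cb) { observers_.push_back(std::move(cb)); }
    void setValue(int value) {
        value_ = value;
        for (const auto& cb : observers_) cb(value_);   // push the update to all observers
    }
private:
    int value_ = 0;
    std::vector<Callback> observers_;
};

// usage: source.subscribe([](int v) { /* react to the new data */ });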
If you have to do some things at the same time, check also the Singleton pattern (well, check the most important ones!).

Good design to build a whole program as a FSM?

I have built a parser using a FSM/Pushdown Automaton approach like here (and it works, well!): C++ FSM design and ownership
It allows me to exit gracefully and output a helpful error message to the user when something goes wrong at the parser stage.
I have been wondering about a good way to get that done in the rest of my program, and naturally, the parser approach popped in my mind...
I would make every object a state with a single event() function containing a switch statement that calls object-specific functions depending on the stage of execution I am in. I can keep track of that with object-specific enums and keep the code more readable (case parser is more readable than case 5). This will allow me to close off the pushdown tree of states I have created (using the m_parent* approach in my other question).
Is this good design (forcing everything into FSM mode)? Is there a better way, and how much more complicated would it be (I find the FSM pretty easy to implement and test)?
Thanks for the suggestions!
PS: I know boost has about everything one may ever need, but I want to limit external dependencies, especially on boost. c++0x is ok though (but not really relevant here I think)
What you are doing is a bit like building a (simple) virtual machine in your programme. An FSM tends to be a good fit for some restricted problems such as lexing and parsing, and as you've probably noted, you can get quite a bit of logging and error management 'for free'.
However, if you try to apply the FSM pattern to everything (which is going to be tough for e.g. GUI programmes which contain quite a lot of state you normally wouldn't want to make into explicit states), you're going to realize that you also need facilities to debug your FSM (since the C++ debugger won't understand your states and events) and facilities to link and reuse states (since the states won't be OO level constructs). If you ever want to hand over your code to someone else, he or she is going to need additional training to use your FSM successfully. Are you going to want to keep one FSM engine for multiple applications? If so, how are you going to deal with versioning and upgrades?
Use the right tool for the right job. Every approach has its strengths and weaknesses. Your solution adds another layer of complexity: you can deal with logging and error handling in more C++-ish ways. If you're not happy with writing C++ code, you might consider other existing languages, rather than building an FSM language only you understand.
Most people would use inheritance instead of switch/case/default. However, the idea of forcing everything to be one way is inherently wrong. You should always approach each required piece of functionality on its own merits.
You can always take a look at boost.

Morphing multiple executables into a single application

I have three legacy applications that share a lot of source code and data. Multiple instances of each of these applications can be executed by a user at any time, e.g. a dozen mixed application executions can be active at a time. These applications currently communicate through shared memory and messaging techniques so that they can maintain common cursor positioning, etc. The applications are written primarily in C++, use Qt and total about 5 million lines of code. Only some of the existing code is thread-safe.
I want to consolidate these three executables into a single executable and use multi-threading functionality to allow multiple instance of each of the three functionality branches to execute at the same time. It has been suggested that I look into some of the features provided by Boost, e.g. shared pointers, and use OpenMP to orchestrate overall execution of the multiple threads.
Any comments on how to proceed will be appreciated, particularly references on the best way to tackle this kind of a refactoring problem.
My suggestion would be to design the desired solution first (assuming, to start with, that the requirements are the same) and then build a phased migration path from the existing code base, taking into account any requirements that third-party functionality may introduce.
Refactoring should be one small step at a time - but knowing where you are going.
Working Effectively with Legacy Code by Michael Feathers (in the Robert C. Martin Series) would be a good read, I'd suggest.
Trust me (I've got the t-shirts): don't attempt to refactor unless you know how to prove functionality - automated verification tests will be your saviour.
I don't think anyone will be able to give you an informed answer. There are too many things to take into consideration. Only after careful analysis of the processes involved and the code behind them would anyone (you included) be able to provide you with a project path.
That said, there are however a few things you should take into consideration.
1. Your applications and the way they communicate with each other already seem to me to be a sound solution. This is so especially because you have 5 million lines of code to tackle if you choose to refactor your current system - a strong deterrent. If there is a need to provide threading support in order to optimize current application messaging, I'd suggest you consider introducing multithreading into the shared memory and messaging routines, instead of merging the applications.
2. The act of merging your applications could be seen at first glance as the "simple" act of gluing your current code together and replacing the routines currently responsible for memory sharing and messaging with normal function calls to the glued objects. My gut tells me it will not be that simple. You have to consider that your current backbone code was almost certainly designed for the current messaging solution; your current abstractions were conceived with it in mind. You will almost certainly find that you also need to make significant changes to the existing backbone code. And here again that wall of 5 million lines becomes a huge problem, because changes will quickly propagate across your entire system and leave you with 5 million lines of code to handle and goodness knows how many new bugs.
I'd suggest 1 above. But it may be that you do need, for some reason, to consolidate your applications. If so, I'd still suggest something else before you try 2: create an interface application responsible for starting up, displaying and maintaining the current 3 applications. Your users will be none the wiser if you do this right. You'll still be able to apply 1 to your current system and join your applications under a common interface, despite them being in fact separate entities.
"...about 5 million lines of code..." Hmm... It is hard to say for sure without knowing your system, but since it is a "legacy" system, you can probably gain a lot by removing code duplication. Check out Simian and CPD.
5 million lines is a lot of code. Of course, you may need it, but my guess is that you don't.
You definitely need a good reason to change a code-base like that to be multi-threaded, especially in C++. Doing it without cleaning it up completely first is a recipe for disaster.

Overcoming bad habit of "fixing it later"

When I start writing code from scratch, I have a bad habit of quickly writing everything in one function, the whole time thinking "I'll make it more modular later". Then when later comes along, I have a working product and any attempts to fix it would mean creating functions and having to figure out what I need to pass.
It gets worse because it becomes extremely difficult to redesign classes when your project is almost done. For example, I usually do some planning before I start writing code; then, when my project is done, I realize I could have made the classes more modular and/or I could have used inheritance. Basically, I don't think I do enough planning, and I don't get more than one level of abstraction.
So in the end, I'm stuck with a program with a large main function, one class and a few helper functions. Needless to say, it is not very reusable.
Has anybody had the same problem and have any tips to overcome this? One thing I had in mind was to write the main function with pseudocode (without much detail, but enough to see what objects and functions are needed). Essentially a top-down approach.
Is this a good idea? Any other suggestions?
"First we make our habits, then they make us."
This seems to apply for both good and bad habits. Sounds like a bad one has taken hold of you.
Practice being more modular up front until it's "just the way I do things."
Yes, the solution is easy, although it takes time to get used to it.
Never claim there will be a "later", where you sit down and just do refactoring. Instead, continue adding functionality to your code (or tests) and during this phase perform small, incremental refactorings. The "later" will basically be "always", but hidden in the phase where you are actually doing something new every time.
I find the TDD Red-Green-Refactor discipline works wonders.
My rule of thumb is that anything longer than 20 LoC should be clean. IME every project stands on a few "just-a-proof-of-concept"s that were never intended to end up in production code. Since this seems inevitable though, even 20 lines of proof-of-concept code should be clear, because they might end up being one of the foundations of a big project.
My approach is top-down. I write
while( obj = get_next_obj(data) ) {
    wibble(obj);
    fumble(obj);
    process( filter(obj) );
}
and only start to write all these functions later. (Usually they are inline and go into the unnamed namespace. Sometimes they turn out to be one-liners and then I might eliminate them later.)
This way I also avoid having to comment the algorithms: the function names are explanation enough.
You pretty much identified the issue. Not having enough planning.
Spend some time analyzing the solution you're going to develop, break it down into pieces of functionality, identify how it would be best to implement them and try to separate the layers of the application (UI, business logic, data access layer, etc).
Think in terms of OOP and refactor as early as it makes sense. It's a lot cheaper than doing it after everything is built.
Write the main function minimally, with almost nothing in it. In most GUI programs, SDL game programs, OpenGL programs, or anything with any kind of user interface at all, the main function should be nothing more than an event-eating loop. It has to be, or there will always be long stretches of time where the computer seems unresponsive and the operating system considers shutting it down because it's not responding to messages.
Once you get your main loop, quickly lock it down, to be modified only for bug fixes, not new functionality. This may just end up displacing the problem to another function, but having a monolithic function is rather difficult to do in an event-based application anyway. You'll always need a million little event handlers.
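As a sketch of that "event-eating" main (EventSource, Event and the handler map below are hypothetical stand-ins for whatever your GUI/SDL/OpenGL framework actually provides), main does nothing but dispatch:

#include <functional>
#include <map>

struct Event { int type = 0; bool quit = false; };
struct EventSource { bool poll(Event& e) { e.quit = true; return true; } };  // stub for illustration

int main() {
    EventSource events;
    std::map<int, std::function<void(const Event&)>> handlers;  // the "million little event handlers"

    Event e;
    while (events.poll(e) && !e.quit) {       // main is nothing more than this loop
        auto it = handlers.find(e.type);
        if (it != handlers.end()) it->second(e);
    }
    return 0;
}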
Maybe you have a monolithic class. I've done that. Mainly the way to deal with it is to try to keep a mental or physical map of dependencies, and note where there are... let's say, perforations, fissures where a group of functions doesn't explicitly share any state or variables with other functions in the class. There you can spin that cluster of functions off into a new class. If it's really a huge class, and really tangled up, I'd call that a code smell. Think about redesigning such a thing to be less huge and interdependent.
Another thing you can do as you're coding: when a function grows to a size where it no longer fits on a single screen, it's probably too big, and at that point you should start thinking about how to break it down into multiple smaller functions.
Refactoring is a lot less scary if you have good tools to do it. I see you tagged your question as "C++" but the same goes for any language. Get an IDE where extracting and renaming methods, extracting variables, etc. is easy to do, and then learn how to use that IDE effectively. Then the "small, incremental refactorings" that Stefano Borini mentions will be less daunting.
Your approach isn't necessarily bad -- earlier more modular design might end up as over-engineering.
You do need to refactor -- this is a fact of life. The question is when? Too late, and the refactoring is too big a task and too risk-prone. Too early, and it might be over-engineering. And, as time goes on, you will need to refactor again .. and again. This is just part of the natural life-cycle of software.
The trick is to refactor soon, but not too soon. And frequently, but not too frequently. How soon and how frequently? That's why it's an art and not a science :)

Need refactoring ideas for Arrow Anti-Pattern

I have inherited a monster.
It is masquerading as a .NET 1.1 application that processes text files conforming to Healthcare Claim Payment (ANSI 835) standards, but it's a monster. The information being processed relates to healthcare claims, EOBs, and reimbursements. These files consist of records that have an identifier in the first few positions and data fields formatted according to the specs for that type of record. Some record ids are Control Segment ids, which delimit groups of records relating to a particular type of transaction.
To process a file, my little monster reads the first record, determines the kind of transaction that is about to take place, then begins to process other records based on what kind of transaction it is currently processing. To do this, it uses a nested if. Since there are a number of record types, there are a number of decisions that need to be made. Each decision involves some processing and 2-3 other decisions that need to be made based on previous decisions. That means the nested if has a lot of nests. That's where my problem lies.
This one nested if is 715 lines long. Yes, that's right. Seven-Hundred-And-Fif-Teen Lines. I'm no code analysis expert, so I downloaded a couple of freeware analysis tools and came up with a McCabe Cyclomatic Complexity rating of 49. They tell me that's a pretty high number. High as in the pollen count in the Atlanta area, where 100 is the standard for high and the news says "Today's pollen count is 1,523". This is one of the finest examples of the Arrow Anti-Pattern I have ever been privileged to see. At its worst, the indentation goes 15 tabs deep.
My question is, what methods would you suggest to refactor or restructure such a thing?
I have spent some time searching for ideas, but nothing has given me a good foothold. For example, substituting a guard condition for a level is one method. I have only one of those. One nest down, fourteen to go.
Perhaps there is a design pattern that could be helpful. Would Chain of Command be a way to approach this? Keep in mind that it must stay in .NET 1.1.
Thanks for any and all ideas.
I just had some legacy code at work this week that was similar to (although not as dire as) what you are describing.
There is no one thing that will get you out of this. The state machine might be the final form your code takes, but that's not going to help you get there, nor should you decide on such a solution before untangling the mess you already have.
The first step I would take is to write a test for the existing code. This test isn't to show that the code is correct but to make sure you have not broken something when you start refactoring. Get a big wad of data to process, feed it to the monster, and capture the output. That's your litmus test. If you can do this with a code coverage tool, you will see what your test does not cover. If you can, construct some artificial records that will also exercise this code, and repeat. Once you feel you have done what you can with this task, the output data becomes the expected result for your test.
Refactoring should not change the behavior of the code. Remember that. This is why you have known input and known output data sets to validate you are not going to break things. This is your safety net.
Now Refactor!
A couple of things I did that I found useful:
Invert if statements
A huge problem I had was just reading the code when I couldn't find the corresponding else statement. I noticed that a lot of the blocks looked like this:
if (someCondition)
{
    100+ lines of code
    {
        ...
    }
}
else
{
    simple statement here
}
By inverting the if, I could see the simple case first and then move on to the more complex block knowing what the other branch already did. Not a huge change, but it helped me in understanding.
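For illustration, a sketch of the same block after inversion (placeholder names; the same shape applies in your .NET 1.1 code) - the simple case now reads first and the long branch sits one level shallower:

void handleRecord(bool someCondition) {
    if (!someCondition)
    {
        // simple statement here
        return;
    }
    // ... the former 100+ lines of the complex branch follow, un-nested
}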
Extract Method
I used this a lot. Take some complex multi-line block, grok it, and shove it aside into its own method. This allowed me to more easily see where there was code duplication.
Now, hopefully, you haven't broken your code (the test still passes, right?), and you have more readable and better understood procedural code. Look, it's already improved! But that test you wrote earlier isn't really good enough... it only tells you that you are duplicating the functionality (bugs and all) of the original code, and only for the lines you had coverage on; I'm sure you will find blocks of code that you can't figure out how to hit or that simply can never be hit (I've seen both in my work).
Now, the big changes, where all the big-name patterns come into play, happen when you start looking at how to refactor this in a proper OO fashion. There is more than one way to skin this cat, and it will involve multiple patterns. Not knowing the details of the format of the files you're parsing, I can only toss around some helpful suggestions that may or may not be the best solutions.
Refactoring to Patterns is a great book to assist in explaining the patterns that are helpful in these situations.
You're trying to eat an elephant, and there's no other way to do it but one bite at a time. Good luck.
A state machine seems like the logical place to start; use WF if you can swing it (sounds like you can't).
You can still implement one without WF, you just have to do it yourself. However, thinking of it as a state machine from the start will probably give you a better implementation than creating a procedural monster that checks internal state on every action.
Diagram out your states, what causes a transition. The actual code to process a record should be factored out, and called when the state executes (if that particular state requires it).
So State1's execute calls your "read a record", then based on that record transitions to another state.
The next state may read multiple records and call record processing instructions, then transition back to State1.
One thing I do in these cases is to use the 'Composed Method' pattern. See Jeremy Miller's Blog Post on this subject. The basic idea is to use the refactoring tools in your IDE to extract small meaningful methods. Once you've done that, you may be able to further refactor and extract meaningful classes.
I would start with uninhibited use of Extract Method. If you don't have it in your current Visual Studio IDE, you can either get a 3rd-party addin, or load your project in a newer VS. (It'll try to upgrade your project, but you will carefully ignore those changes instead of checking them in.)
You said that you have code indented 15 levels. Start about half-way out, and Extract Method. If you can come up with a good name, use it, but if you can't, extract anyway. Split in half again. You're not going for the ideal structure here; you're trying to break the code into pieces that will fit in your brain. My brain is not very big, so I'd keep breaking and breaking until it doesn't hurt any more.
As you go, look for any new long methods that seem to be different from the rest; make these into new classes. Just use a simple class that has only one method for now. Heck, making the method static is fine. Not because you think they're good classes, but because you are so desperate for some organization.
Check in often as you go, so you can checkpoint your work, understand the history later, be ready to do some "real work" without needing to merge, and save your teammates the hassle of hard merging.
Eventually you'll need to go back and make sure the method names are good, that the set of methods you've created make sense, clean up the new classes, etc.
If you have a highly reliable Extract Method tool, you can get away without good automated tests. (I'd trust VS in this, for example.) Otherwise, make sure you're not breaking things, or you'll end up worse than you started: with a program that doesn't work at all.
A pairing partner would be helpful here.
Judging by the description, a state machine might be the best way to deal with it. Have an enum variable to store the current state, and implement the processing as a loop over the records, with a switch or if statements to select the action to take based on the current state and the input data. You can also easily dispatch the work to separate functions based on the state using function pointers, if it's getting too bulky.
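A minimal sketch of that enum-plus-loop-plus-switch shape (shown in C++ for brevity; the structure maps directly onto .NET 1.1, and the state names and conditions here are invented):

#include <string>
#include <vector>

enum class State { ExpectHeader, InTransaction };

void processFile(const std::vector<std::string>& records) {
    State state = State::ExpectHeader;
    for (const auto& record : records) {
        switch (state) {
        case State::ExpectHeader:
            // read the control segment and decide what kind of transaction follows
            state = State::InTransaction;
            break;
        case State::InTransaction:
            // dispatch the record to its processing function; go back to
            // ExpectHeader when the transaction's trailer record is seen
            if (record.empty()) state = State::ExpectHeader;   // placeholder condition
            break;
        }
    }
}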
There was a pretty good blog post about it at Coding Horror. I've only come across this anti-pattern once, and I pretty much just followed his steps.
Sometimes I combine the state pattern with a stack.
It works well for hierarchical structures; a parent element knows what state to push onto the stack to handle a child element, but a child doesn't have to know anything about its parent. In other words, the child doesn't know what the next state is, it simply signals that it is "complete" and gets popped off the stack. This helps to decouple the states from each other by keeping dependencies uni-directional.
It works great for processing XML with a SAX parser (the content handler just pushes and pops states to change its behavior as elements are entered and exited). EDI should lend itself to this approach too.
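A rough sketch of that state-plus-stack idea (all names here are illustrative, not from a real SAX library): the parent decides which child state to push, and the child only reports that it is complete - it never needs to know what its parent was.

#include <memory>
#include <stack>
#include <string>

class Handler {
public:
    virtual ~Handler() = default;
    // Called on an opening tag; may return a child handler to push.
    virtual std::unique_ptr<Handler> startElement(const std::string& name) = 0;
    // Called on a closing tag; true means "I am complete, pop me".
    virtual bool endElement() { return true; }
};

class ItemHandler : public Handler {
public:
    std::unique_ptr<Handler> startElement(const std::string&) override { return nullptr; }  // leaf state
};

class DocumentHandler : public Handler {
public:
    std::unique_ptr<Handler> startElement(const std::string& name) override {
        if (name == "item") return std::make_unique<ItemHandler>();   // parent chooses the child state
        return nullptr;
    }
    bool endElement() override { return false; }                      // the root stays on the stack
};

// A SAX-style driver keeps the dependency uni-directional, outside the states:
void onStartElement(std::stack<std::unique_ptr<Handler>>& stack, const std::string& name) {
    if (auto child = stack.top()->startElement(name)) stack.push(std::move(child));
}
void onEndElement(std::stack<std::unique_ptr<Handler>>& stack) {
    if (stack.top()->endElement()) stack.pop();
}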