Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I'm not an experienced programmer, so please bear with me. As a consequence I need to be specific about my problem, which is about building an architecture to represent a power plant hierarchy.
I'm trying to construct a flexible architecture to represent contracts and pricing/analysis for multiple types of power plants. I am reading the Alexandrescu book on generic design patterns and policy classes, as it seems a good way to handle the need for flexibility and extensibility in what I want to do. Let's detail a bit:
A power plant can run on different types of combustible (i.e. be of different types): coal, gas, or fuel. Within each combustible, you can choose among different sub-types (of different quality, or tied to a different financial index). Within those sub-types, the contract formula describing the delivery can again be of different kinds (time series averaged with FX inside, or via a division, etc.). Furthermore, you may be in Europe and subject to an emissions reduction scheme, so that CO2 credits enter the formula for your margin, or not, depending on regulatory issues. Likewise, you can choose to value the power plant using different methodologies, and so on.
Thus my point is that you can represent an asset in very different ways, depending on regulation, the choices you make, the type of contract you agree with a counterparty, and the valuation you want to run, and you CLEARLY don't want to write the same code a hundred times with only small changes. As I said at the beginning, I am trying to find the best programming techniques to structure my program, but I'm new to building software architecture. It appears to me that policy classes would be great for such an architecture, as they can express exactly the kind of choices we have to make.
However, putting it into practice gives me a headache. I thought of a generic object factory where PowerPlant* is my abstract type, with functions like void price() or riskAnalysis() being pure virtual. Then I would need to build a hierarchy based on this and derive the concrete elements.
I couldn't really work out what you want, but I think you should learn programming before you try to do anything related to programming.
Learning is hard and takes a lot of time, but it's worth it. It's also more useful than asking and getting an answer without an explanation. ;)
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I need to make a state machine for a hardware device. It will have more than 25 states and I am not sure what design to apply.
Because I am using C++11 I thought of using OOP and implementing it with the State pattern, but I don't think it is appropriate for an embedded system.
Should it be more like a C-style design? I haven't coded one before. Can someone give me some pointers on what design is best suited?
System information:
ARM Cortex-M4
1 MB Flash
196 KB Ram
I also saw this question; the accepted answer points to a table-driven design, the other answer to a State pattern design.
The State pattern is not very efficient, because every call goes through at least a pointer and a vtable lookup, but as long as you don't update your state every 2 or 3 clock cycles or call a state machine function inside a time-critical loop, you should be fine. After all, the M4 is quite a powerful microcontroller.
The question is whether you need it or not. In my opinion, the State pattern only makes sense if the behavior of the object differs significantly in each state (with the need for different internal variables in each state) and if you don't want to carry variable values over across state transitions.
If your state machine is only about taking the transition from A to B on reading event alpha and emitting signal beta in the process, then the classic table- or switch-based approach is much more sensible.
EDIT:
I just want to clarify that my answer wasn't meant as a statement against C++ or OOP, which I would definitely use here (mostly out of personal preference). I only wanted to point out that the State pattern might be overkill, and that just because one is using C++ doesn't mean one has to use class hierarchies, polymorphism, and special design patterns everywhere.
Consider the QP active object framework, a framework for implementing hierarchical state machines in embedded systems. It's described in the book, Practical UML Statecharts in C/C++: Event Driven Programming for Embedded Systems by Miro Samek. Also, Chapter 3 of the book describes more traditional ways of implementing state machines in C and C++.
Nothing wrong with a class. You could define a 'State' enum and pass in, or queue, events, using a switch on State to dispatch to the correct action code/function. I prefer that over the classic 'State-Machine 101' table-driven approach for simpler hardware-control state engines. Table-driven engines are awesomely flexible, but can get a bit convoluted for complex functionality and are somewhat more difficult to debug.
Should it be more like a C style design ?
Gawd, NO!
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am working on a project where I would like to achieve a sense of natural language understanding. However, I am going to start small and would like to train it on specific queries.
So for example, starting out I might tell it:
songs.
Then if it sees a sentence like "Kanye West's songs" it can match against that.
BUT then I would like to give it some extra sentences that could mean the same thing, so that it eventually learns to predict that unknown sentences belong to a set I have trained it on.
So I might add the sentence: "Songs by
And of course would be a database of names it can match against.
I came across a neat website, Wit.ai, that does something like what I'm describing. However, they resolve their matches to an intent, whereas I would like to match to a simplified query or, BETTER, a database-like query (like Facebook Graph Search).
I understand a context-free grammar would work well for this (anything else?). But what are good methods to train several CFGs that I consider to have similar meaning, so that when the system sees an unknown sentence it can try to predict which one it matches?
Any thoughts would be great.
Basically, I would like to take a natural language sentence and convert it to some form that can be better understood by my system and presented to the user in a nice way. Not sure if there is a better Stack Exchange for this!
To begin with, I think SO is quite well-suited for this question (I checked Area 51, there is no stackexchange for NLP).
Under the assumption that you are already familiar with the usual training of PCFG grammars, I am going to move into some specifics that might help you achieve your goal:
Any grammar trained on a corpus will depend on the words in that training corpus. Poor performance on unknown words is a well-known issue not just in PCFG training, but in pretty much any probabilistic learning framework. What we can do, however, is look at the problem as a paraphrasing issue. After all, you want to group together sentences that have the same meaning, right?
In recent research, detecting sentences or phrases with the same (or similar) meaning has employed a technique known as distributional similarity. It aims at improving probability estimation for unseen co-occurrences. The basic concept is:
words or phrases that share the same distribution—the same set of words in the same context in a corpus—tend to have similar meanings.
You can use only intrinsic features (e.g. production rules in a PCFG) or bolster such features with additional semantic knowledge (e.g. ontologies like Freebase). Using additional semantic knowledge enables generating more complex sentences/phrases with similar meanings, but such methods usually only work well for specific domains. So if you want your system to work well only for music, it's a good idea.
Reproducing the actual distributional similarity algorithms will make this answer insanely long, so here's a link to an excellent article:
Generating Phrasal and Sentential Paraphrases: A Survey of Data-Driven Methods by Madnani and Dorr.
For your work, you will only need to go through section 3.2: Paraphrasing Using a Single Monolingual Corpus. I believe the algorithm labeled as 'Algorithm 1' in this paper will be useful to you. I am not aware of any publicly available tool/code that does this, however.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I'm designing a simple rule engine. Let me start by giving an overview.
The engine is initialized with a configuration file which specifies the rules to be run, and also parameters intended to be used by these rules.
Example:
I have an incoming order object and I would like to do some sanity checks on it, such as:
the order quantity cannot be greater than some quantity X (X is passed as a parameter to the engine). This is a simple example of a parameter being passed.
A complex example:
Some order Type.Some region.Some desk.Order Quantity = X
Some order Type.Some region.Some desk.Some trader.Quantity = y.
Some order Type.Some region.Some Product.Daily Volume = A
Some order Type.Some region.Some desk.Daily Volume = B
A lot of parameters like these are used to initialize the engine which are intended to be used by the rules.
Question:
How should these initialization parameters be passed to the API? JSON, XML?
What is the best software design practice to represent, process, and store these parameters so that the rules can use this information (e.g. what quantity is allowed for a trader group, in order to sanity-check the incoming order object)?
I'm planning to implement this in C++
Thanks in advance
Before jumping into creating a new rules engine you should really be aware of the complexity involved, e.g. the Rete algorithm. That complexity makes sense if you plan to maintain over a thousand rules, because from those numbers and up, evaluating the rules sequentially becomes prohibitive performance-wise, and performance is particularly important in a trading system. I would actually research and explore reusing an existing rules engine, e.g. CLIPS, Drools, JRules, etc.
Another promising possibility is to embed some sort of scripting language within your process (usually possible in e.g. Java) that can access your in-memory domain model, e.g. embed a Python interpreter and use it as your "rules engine". If you absolutely must implement your own, you can use yacc and lex, but the last time I used them I remember it wasn't really fun, and you have to be aware of the complexities, i.e. scalability might become an issue if you plan to have thousands of rules or more.
For rule management, i.e. bookkeeping, editing, annotating, versioning, etc., you will want XML, with the actual rule under a CDATA element.
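As a purely illustrative sketch of that last point, a rule entry might look something like this; every element and attribute name here is invented, not from any particular engine:

```xml
<!-- Metadata lives in ordinary XML elements; the rule text itself sits in
     CDATA so operators like < and & need no escaping. -->
<rule id="max-desk-quantity" version="3">
  <description>Order quantity may not exceed the configured desk limit.</description>
  <body><![CDATA[
    order.quantity <= param("Some order Type.Some region.Some desk.Order Quantity")
  ]]></body>
</rule>
```

The CDATA section is what lets the stored rule stay readable and diff-friendly while the surrounding XML carries the versioning and annotation information.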
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I parse/process data coming from many different streams (with different formats), and the number of data sources in my system keeps growing. I have a factory class which, based on a config file specifying the source, gives me the appropriate parser/processor pair (conforming to a small common interface), requested something like this:
static Foo* FooFactory::createFoo(source c, /* a couple of flags */)
{
    switch (c)
    {
    case SOURCE_A:
        // 3 or 4 lines to put together a parser for A, and
        // something to process stuff from the parser
        return new FooA(/*args*/);
    // too many more cases, which has started to worry me
    default:
        return NULL;
    }
}
The problem is that as the number of sources has grown, I am facing two issues. First, when I build, I find myself pulling in all the FooA, FooB, FooC, FooD, FooE... code, even if I only want to build a binary that will ever request FooA. So how do I go about modularizing that? A secondary issue: right now, in the case of SOURCE_A, I return a FooA, but what if I'm interested in SOURCE_A but have different ways of parsing it, and want, say, FooA_simple and FooA_careful, with the ability to plug and play as well?
For some reason, one thing that came to mind was the -u option to the linker when building a binary... it somehow suggests the notion of plug and play, but I'm not sure what a good approach to the problem would be.
Well, you just create a factory interface and divide the logic among subtypes of that factory. There might be a sub-factory (type/instance) for libFooA, and another for libFooB. You can then create a composite factory from whichever sub-factories/libraries you want to support in a particular scenario/program, and subdivide the factories even further. You could also create factory enumerators for your composite types and do away with all that switch logic. Then you might tell your libFooA factory instance to enable careful mode or simple mode at that higher level. Your graph of FooFactory instances and subtypes can thus easily vary, and the class structure can look like a tree. Libraries are one way to minimize dependencies, but there may be more logical ways to divide the specialized sub-factories.
I'm not sure you can get around importing FooA, FooB, ... because at any given moment any one of them might be instantiated. As for modularizing it, I'd recommend creating helper functions that get called inside the switch statement.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I am sorry - C++ source code can be seen as the implementation of a design, and by "reverse-engineering" I mean getting the design back. It seems most of you have read it as getting C++ source from binaries. I have posted a more precise question at Understanding a C++ codebase by generating UML - tools & methodology.
I think there are many tools that can reverse-engineer C++ (source code), but it is usually not so easy to make sense of what you get out.
Has anybody found a good methodology?
One of the things I might want to see, for example, is the GUI layer and how it is (or is not) separated from the rest. I think the tools should somehow detect packages, and then let me organize them manually.
To my knowledge, there are no reliable tools that can reverse-engineer compiled C++.
Moreover, I think it would be near impossible to construct such a tool. A compiled C++ program becomes nothing more than machine language instructions. In order to know how those map back to C++ constructs, you need to know the compiler, compiler settings, libraries included, etc. ad infinitum.
Why do you want such a thing? Depending on what you want it for, there may be other ways to accomplish what you're really after.
While it isn't a complete solution, you should look into IDA Pro and Hex-Rays.
They are more for "reverse engineering" in the traditional sense of the phrase: they will give you a good idea of what the code would look like in a C-like language, but will not (cannot) provide fully functioning source code.
What it is good for is getting a solid understanding of how a particular segment (usually a function) works. It is "user assisted", meaning that it will often emit a lot of dereferences of offsets where there is really a struct or class. At that point, you can supply the decompiler with a struct definition (classes are really just structs with extra things like v-tables) and it will reanalyze the code with the new type information.
Like I said, it isn't perfect, but if you want to do "reverse engineering" it is the best solution I am aware of. If you want full "decompilation" then you are pretty much out of luck.
You can recover control flow from a disassembly, but you will never get the data types back...
There are only integers (and maybe some shorts) in assembly. Think about objects, arrays, structs, strings, and pointer arithmetic all being the same type!
The OovAide project at http://sourceforge.net/projects/oovaide/ (also on GitHub) has a few features that may help. It uses the CLang compiler to retrieve accurate information from the source code. It scans the directories looking for source code, and collects the information into a smaller dataset containing what is needed for analysis.
One concept is called Zone Diagrams. It shows relationships between classes at a very high level: each class is shown as a dot on the diagram, and relationship lines are shown connecting them. This allows the diagrams to show hundreds or thousands of classes.
The OovAide zone diagram display has an option called "Show Child Zones", which groups classes within the same directories closer to each other. There are also directory filters, which allow reducing the number of classes shown on a diagram for very large projects. An example of zone diagrams and how they work is shown here:
http://oovaide.sourceforge.net/articles/ZoneDiagrams.html
If the directories are assigned component types in the build settings, then the component diagram will show the dependencies between components. It even shows which components depend on external components such as GTK or other external libraries.
The next level down shows something like UML class diagrams, but shows all relations instead of just aggregation and inheritance. It can show classes that are used within methods, or classes that are passed as parameters to methods. Any class can be chosen as a starting point; then, before a class is added to the diagram, a list is displayed that lets you see which classes would be added, by relationship type.
The lowest level shows sequence diagrams. These allow navigating up or down the call tree while showing the classes that contain the methods.