I've got two applications (let's call them AppA and AppB) communicating with each other.
AppA is sending objects to AppB.
There could be different objects and AppB does not support every object.
An object could be a Model (think of a game, where models are vehicles, houses, persons etc).
There could be different AppBs, each supporting a different set of objects.
E.g., there could be an AppB which supports only vehicle models, and another AppB which supports only specific airplane models.
The current case is the following:
There is a BasicModel which has a position and an orientation.
If a user wants extra attributes, they derive an ExpandedModel from it and add, e.g., a Color attribute.
Now every user who needs additional attributes inherits from a more general model. After a while there is a VehicleModel which can activate its windshield wipers, an AircraftModel which can have landing lights, or a PersonModel which can wave goodbye when a certain boolean is set to true.
AppB always needs to be customized before it can support a new model.
This approach has a big disadvantage: it gets extremely complex after a few levels of inheritance. There may also be redundancies, like an ExpandedAircraftModel which could use windshield wipers too.
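For illustration, a minimal C++ sketch of where this hierarchy heads (the class names follow the text; the members are invented):

#include <string>

struct BasicModel {
    double x = 0, y = 0, z = 0;          // position
    double yaw = 0, pitch = 0, roll = 0; // orientation
};

struct ExpandedModel : BasicModel {
    std::string color;                   // the extra attribute from the example
};

struct VehicleModel : ExpandedModel {
    bool windshieldWiperOn = false;
};

struct AircraftModel : ExpandedModel {
    bool landingLightsOn = false;        // but it wants wipers too -> redundancy
};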
Another approach:
I create just one Model class which has an attribute list. The simplest implementation would be a std::map where the key is the attribute name and the value is the attribute value.
The user can now enter as much information as they want. If they want to use a windshield wiper, they just add a "windshieldwiper - ON" pair.
If AppB supports windshield wipers, it just checks whether such an attribute is in the list and reads the related value.
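A minimal sketch of this attribute-list approach (the key name and value encoding are just examples):

#include <iostream>
#include <map>
#include <string>

struct Model {
    std::map<std::string, std::string> attributes;
};

int main() {
    // AppA side: attach whatever attributes this model needs.
    Model m;
    m.attributes["windshieldwiper"] = "ON";

    // AppB side: support an attribute only if it is present.
    auto it = m.attributes.find("windshieldwiper");
    if (it != m.attributes.end()) {
        std::cout << "wiper state: " << it->second << "\n";
    }
}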
A developer of AppB needs to document well which attributes it supports, and every developer has to check whether a specific attribute already exists and what it is called (e.g., one developer might name his attribute windshieldwiper while another calls it windshield-wiper).
This could get extremely complex too, and the only thing a user can rely on is the documentation or a standard specification which has to be kept in a central place.
Finally, the question:
Which approach is better?
Do you see any additional disadvantages?
Is there a third approach which should be used instead of these two?
Just for comparison: Google's Protocol Buffers uses a combination of both, but leans hard toward your second example.
If you have distinctly different data that needs to be sent over the channel, you use the tool to generate a derivative of the "message" class, but each message can contain other messages, and you can nest message definitions in themselves. When a message is sent out, the receiver checks the fields to determine what type of message it is and what fields are contained within.
The downside is that your code becomes overly verbose very quickly, since you can't really use inheritance to automate the process of acting on an incoming message, but the upside is that your protocol messages stay highly organized and easy to debug, since you're using a reflective attribute list of sorts.
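For instance, under a hypothetical schema where a top-level message carries a oneof with one nested message per model kind, the receiver-side dispatch might look roughly like this (the schema, type names, and handler functions are assumptions for illustration, not from the original post):

#include "models.pb.h"  // assumed: generated by protoc from the sketch below

// Hypothetical schema:
//   message Envelope {
//     oneof payload {
//       VehicleModel vehicle = 1;
//       AircraftModel aircraft = 2;
//     }
//   }

void handleVehicle(const VehicleModel& v);    // assumed application handlers
void handleAircraft(const AircraftModel& a);

void dispatch(const Envelope& msg) {
    // The receiver checks which field is set to determine the message type.
    switch (msg.payload_case()) {
        case Envelope::kVehicle:
            handleVehicle(msg.vehicle());
            break;
        case Envelope::kAircraft:
            handleAircraft(msg.aircraft());
            break;
        default:
            break;  // unsupported model kind: this AppB ignores it
    }
}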
I am new to Django and want to understand the concept of signals more deeply.
I know how they work, but I really don't understand when one should use them.
The docs say: 'They’re especially useful when many pieces of code may be interested in the same events.'
What are some real applications that use signals to their advantage?
E.g., I'm trying to add phone verification after user signup. Because it can be integrated inside a single app, and the only thing interested in the event is this 'verify' function, I don't really need a signal. I can just pass the information from one view to the other, rather than using the pre_save signal from the registration.
I'm sorry if my question is kind of basic. But I really want some insight into real applications in which many pieces of code are interested in one particular event, and into the trade-offs in my application.
Thanks!!
Signals are often used when you need to do some low-level, database-specific work. For example, if you use ElasticSearch for better document search on your site, you may want to automatically update the search indexes when a new document is created or an old one is edited.
You may also have complex logic for managing database objects. For example, you may need specific deletion logic: when a user is deleted, you may want to replace all links to his profile with a placeholder. Or, when a new message is created or some other action is performed by a user, you may want to update the "last visited" field in the user's profile, even though there is no direct relation between that action and updating the profile.
But when you're just implementing business logic, as in your example with verification, you don't need signals, because you don't need any universal logic related to deleting/creating/editing arbitrary objects: you have a certain object to work with and can act on it directly.
Our application has components which consume components which consume components of varying complexity. All I want is for the input on the page to validate, when an object is set, that the text is correct. The issue is that the input is in one of these subcomponents.
My colleague told me that there are two ways to do this. The first is to use page objects, chaining annotations to find the component on my page, then the next id, and so on until my input is found. This requires me to look through another team's component markup to narrow it down to the input I want to leverage. I don't believe I should have to go into another component's definition, or a definition of a definition, to get the appropriate chain to this arbitrary input. It also creates issues where, if a lateral team makes changes unbeknownst to me, my page object will break.
The other option my friend suggested was to use fixture.query to find the component. This would be as simple as:
fixture.query((el) => el.attributes["id"] == "description", (comp) {
  expect(comp.value, value);
});
Using query looks at the markup but then automatically componentizes it as the appropriate subcomponent. In this case, comp.value is the value stored in the HTML. So, if I did something like:
fixture.update((MainComponent comp) {
  comp.myinput.value = new Foo();
});
then I am setting and getting this programmatically, so I am a bit unsure whether it would properly reflect what is on the screen.
What's the best course of action? It seems page objects would be better, but I'm not sure there is a way around having to deep-query for input boxes outside of the component I am testing.
Thanks
I don't think I have a definitive answer for you, but I can tell you how we do it at Google. For pretty much any component we provide a page object alongside the component. This is twofold: it is for testing that widget, and it also gives us a shareable resource for other tests.
For leaf widgets the page objects are a little less fleshed out and are really just there for the local test. For components that are shared heavily, the page object is a bit more fleshed out for reusability. Without this, much of the API for the widget (HTML, CSS, etc.) would need to be considered public, and changes to it would be very hard (the person responsible for making a public breaking change needs to fix all associated code). With it, we can have a contract to support only the page object API, and HTML structure changes are not considered breaking changes. At times we have even gone so far as to have two page objects for a widget: one for the local test, and one to share. Sometimes the API you want to expose for a local test is much more than you want people to use themselves.
We can then compose these page objects into higher-level page objects that represent the widget. Good page objects support a higher level of abstraction for that widget. For example, a calendar widget would let you go to the next/previous month, get the currently selected date, etc., rather than directly exposing the buttons/inputs that accomplish those actions.
We plan to expose these page objects for angular_components eventually, but we are currently working out how to do so. Our internal package structure is different from what we have externally: we have many packages per individual widget (page objects, examples, the widget itself), and we need to reconcile this externally before we expose them.
Here is an example:
import 'package:pageloader/objects.dart';

import 'material_button_po.dart';

/// Webdriver page object for `material-yes-no-buttons` component.
@EnsureTag('material-yes-no-buttons')
class MaterialYesNoButtonsPO {
  @ByClass('btn-yes')
  @optional
  MaterialButtonPO yesButton;

  @ByClass('btn-no')
  @optional
  MaterialButtonPO noButton;
}
Consider that in the database you have a table called users and a table called wallets. Among other things, a user has zero, one, or more wallets. The relation is one-to-many, meaning that the wallet has a foreign key pointing at the user.
Now the question is the following: when building a struct or a class for a user, I see two possibilities:
1) The user has no sign of wallets. There is a function which takes a user as an argument and fetches an array of the wallets.
2) The user has a member which is an array containing the wallets, and the wallets are fetched when the object/struct is created.
I think that the first approach may be better, since it's more modular: in the second one, users depend on wallets, even if a user has no wallets.
Still, I am not sure which approach is better so I am looking for a comparison of both approaches.
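For illustration, here is a minimal C++ sketch of the two options (the types, fields, and the in-memory stand-in for the database are all made up):

#include <map>
#include <string>
#include <vector>

struct Wallet {
    std::string id;
    long balanceCents = 0;
};

// Option 1: the user type has no sign of wallets.
struct User {
    std::string id;
};

// Stand-in for the database table: user id -> wallets.
std::map<std::string, std::vector<Wallet>> walletTable;

// A separate function fetches the wallets on demand.
std::vector<Wallet> fetchWallets(const User& user) {
    auto it = walletTable.find(user.id);
    return it == walletTable.end() ? std::vector<Wallet>{} : it->second;
}

// Option 2: the user owns its wallets, fetched when the object is created.
struct UserWithWallets {
    std::string id;
    std::vector<Wallet> wallets;  // populated eagerly by whoever constructs it
};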
On the application level you might have a user type like this (Go notation):
type User interface {
    Wallets() []Wallet
}
Far beneath, there's a database, which in your case is SQL. That should not be obvious from looking at your application, though.
Making assumptions about dependencies beyond what they guarantee in the form of interface contracts couples the components irreversibly.
That means that if you model your application after your database's schema, you're doing it wrong, because your entire application is now tightly coupled to that database, and any change to any part of it will have a big, unpredictable impact.
A common solution is to use a so-called ORM layer, which sits between your database driver and your entity models. It will take care of things like:
how and when should the wallets be fetched?
where in the database is a wallet's information stored?
when you remove a user, should the wallet also be deleted?
among other things.
PS: This answer applies to both statically and dynamically typed languages.
I would like to dynamically build a form to edit a set of properties (say, from an XML file or similar).
On top of that, I would like to perform validation for each property (mandatory values/optional values) with a set of rules (ideally also dynamically loaded).
These rules could be associated with a single field (allowed values, range, ...) but could also link several fields (conditional validation).
I would like to be able to save the results "on the fly" (as soon as a field loses focus).
Does someone have a good lead to get me started?
Here is what I found so far:
I could start from the Qt property browser framework for the dynamic form generation and extend it to suit my needs.
Regarding validation, I read about QValidator, which seems to be a good start. However, I couldn't find anything involving several fields (cross-parameter validation).
The QSettings framework does this auto-save feature quite nicely, and I guess I could reuse that.
I just wanted to be sure I am not missing an existing framework that covers my goals, since it seems like a relatively standard thing to do.
Assuming that the fields of the form are fixed, you could use a shared instance of a QValidator to validate the text in all the fields by running your validation over a list/dictionary/map containing pointers to the fields. The list/dictionary/map would have to be dynamically populated and cleared, and a pointer to it hard-coded inside QValidator::validate. And if sharing a QValidator is not allowed, you will have to create individual ones and execute your cross-field validation in each.
Alternatively, you could use Qt's signal-slot mechanism to run your validation whenever the text in a field is changed.
I didn't know about QSettings, and would have used the very same signal-slot mechanism to do the autosave; a sketch of that approach follows below.
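A minimal sketch of the signal-slot variant, assuming Qt widgets (the two fields, the cross-field rule, and the settings keys are invented for illustration):

#include <QApplication>
#include <QLineEdit>
#include <QSettings>

int main(int argc, char** argv) {
    QApplication app(argc, argv);

    QLineEdit minField, maxField;  // two fields linked by one cross-field rule
    QSettings settings("MyOrg", "MyApp");

    // Validate both fields together and auto-save when a field loses focus.
    auto validateAndSave = [&] {
        bool okMin = false, okMax = false;
        const int lo = minField.text().toInt(&okMin);
        const int hi = maxField.text().toInt(&okMax);
        if (okMin && okMax && lo <= hi) {  // cross-field rule: min <= max
            settings.setValue("range/min", lo);
            settings.setValue("range/max", hi);
        }
    };
    QObject::connect(&minField, &QLineEdit::editingFinished, validateAndSave);
    QObject::connect(&maxField, &QLineEdit::editingFinished, validateAndSave);

    minField.show();
    maxField.show();
    return app.exec();
}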
I have a C++ application designed according to a classic Model-View-Controller pattern. The model is modified through a controller interface by an external source by means of a Command pattern. The commands are represented by an Action object (and its derivatives).
Now I want to be able to undo the modifications, but my problem is that I have no getters in my controller, only setters. This seems quite logical, since there's no reason someone should be able to get info about the model through the controller. Thus, I can't have my Action objects store the state of the Model, since they have no access to it.
How would one solve this? I'd like to keep my application as extensible as possible, and I'm not quite sure which option is best for that. The methods I thought up so far are:
Putting getter methods in the controller. This seems to go against the MVC pattern.
Giving the Action a pointer to a View. The Action could then either:
Use individual getters to get the state of specific elements of the model to be modified.
Use a Memento method implemented by the Viewer.
Maybe there's an even better way to do this? Right now the best option seems to be 2 with suboption 1 (with suboption 2, I'd quite possibly store a lot more state than necessary to undo one action).
Note: I know there are other questions on how to implement an undo action. However, the only answers I found suggested using a Command or Memento pattern. I know this is most probably the way to go. What I'm asking is how to integrate this as cleanly and extensibly as possible in an MVC design.
[Edit] What I don't like about the Memento pattern is that it forces me to store a complete state. Let's say my model is a 1000x1000 matrix and my command is ChangeOneValueAtLocation. To be able to undo its changes, the ChangeOneValueAtLocation object only needs to store the previous value of the location it's changing, but that doesn't seem possible with Memento. The larger my model, the bigger this problem becomes.
[Edit 2] Another problem I have with Memento in the specific case of this application: for every method a Command object can execute on the Model, there's a method that does the exact opposite (or can easily be coaxed to do so). This is why I would find it a waste to have to store the whole state; there should be no need to, since reverting a single Command is very straightforward. The only problem is getting the data to be able to do it.
Also, I don't need to be able to undo an arbitrary Command, only the topmost one on my history stack.
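For what it's worth, here is a minimal sketch of that idea: a command that stores only the delta it needs to revert itself (the Matrix stand-in and all names are invented for illustration):

#include <cstddef>
#include <vector>

// Stand-in for the model: a flat matrix of values.
struct Matrix {
    std::vector<double> cells;
    explicit Matrix(std::size_t n) : cells(n, 0.0) {}
};

class ChangeOneValueAtLocation {
public:
    ChangeOneValueAtLocation(std::size_t index, double newValue)
        : index_(index), newValue_(newValue) {}

    void execute(Matrix& m) {
        previous_ = m.cells[index_];  // remember only the one overwritten value
        m.cells[index_] = newValue_;
    }

    void undo(Matrix& m) {
        m.cells[index_] = previous_;  // the inverse needs no full-model Memento
    }

private:
    std::size_t index_;
    double newValue_;
    double previous_ = 0.0;
};

A history stack of such commands then only ever needs to undo the topmost one.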
I also favor putting undo support in the model layer. There are quite a few ways to handle this on the model side. The first and most obvious is for the models themselves to remember the history of changes with "labels", but this is probably going to be difficult to synchronize across all your model classes.
One other option is to create a history manager that has a concept of a "transaction", which causes it to generate an undo point and take a snapshot of your models, or start recording changes (for reduced memory usage), or record the commands that cause model changes, etc. The models notify the manager on change, and finally you complete the transaction (or not, because the next start of a transaction can be the end of the previous one). Once you add the ability to roll back to a certain point, the work is done. By making this manager class slightly more complicated, you can create an undo tree (like the one in Emacs), so it is also quite a flexible approach.
The above solution is not quite in the model layer, though; it is a support class that is driven by both the model and the controller. If you remove the transaction concept, it becomes completely model-driven, but implementing the concept of an undo operation might be somewhat tricky. If you change it to act as a command proxy, it is the only entity used by your controllers and is clearly a model. The design is too rough at this point to choose one approach over another, but I am leaning towards the "transaction" model. It feels easy enough to implement.
I'd really recommend building the undo tree into your controller.
Building it into the model could run you into trouble:
the 'model' is usually fragmented per view (each view has its own partial model)
this will lead to non-atomic undo (undoing part of an operation because one view doesn't know what other things (models) would have to be undone, etc.)
The controller is the 'action dispatcher', so it would have to:
clone the state (all models) into a snapshot
add the action to the history with a reference to the snapshot
run the action
Then undo would be:
pop the action off the history stack (optionally push it onto a 'future' stack for redo)
restore the snapshot
display the view
Also, make undo work with high-level actions (see the Composite pattern or Command pattern); a minimal sketch of the snapshot scheme follows below.
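A minimal sketch of that snapshot-per-action scheme (the single-string Model stands in for "all models"; all names are invented):

#include <functional>
#include <stack>
#include <string>

// Stand-in for "all models": a single string document.
struct Model {
    std::string text;
};

// The controller dispatches actions and keeps one snapshot per action.
class Controller {
public:
    explicit Controller(Model& model) : model_(model) {}

    void run(const std::function<void(Model&)>& action) {
        history_.push(model_);    // clone the state into a snapshot
        action(model_);           // then run the action
    }

    void undo() {
        if (history_.empty()) return;
        model_ = history_.top();  // restore the snapshot
        history_.pop();           // pop the action off the history stack
    }

private:
    Model& model_;
    std::stack<Model> history_;
};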
Build the undo functionality into the model itself. Let your model keep a list of commands, and run the commands in reverse order when your view passes an undo signal to the model.