TL;DR: I'd like to have a tool that receives a RESTful schema as input and produces a PyQt dialog/UI as output, preferably with automatic submission/validation.
I'm working on a PyQt5 application that interacts with a remote Django server using django-rest-framework.
I find that I can define most of my models/views/serializers quite quickly, as they neatly extend one another. After writing a proper model, generating the serializer and view is very easy, and I end up with a fully functioning server side fast.
The client/GUI side is a different matter. I have to define the available fields again, their type and order. I have to define widgets for viewing a single object and a list of objects. I have to define edit interfaces and handle permissions.
This all seems like it could use some sort of automation.
Ideally, I could point a smart widget or form at a REST endpoint, and it would automatically fetch the schema and allowed actions, build a GUI, and wire up the necessary error handling.
Ideally, this shouldn't depend on the server-side technology, but simply use a schema.
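To make the idea concrete, here is a minimal sketch of the first step: django-rest-framework can report per-field metadata for an endpoint (via an OPTIONS request), and that metadata could be mapped to widget choices. The widget mapping and the field names below are illustrative assumptions, not an existing tool.

```python
# Hypothetical sketch: turn the field metadata that django-rest-framework
# returns for an OPTIONS request into a simple form specification that a
# PyQt builder could consume. The widget names are illustrative choices.
WIDGET_FOR_TYPE = {
    "string": "QLineEdit",
    "integer": "QSpinBox",
    "boolean": "QCheckBox",
    "choice": "QComboBox",
    "datetime": "QDateTimeEdit",
}

def build_form_spec(fields):
    """fields: dict of field name -> metadata, as in a DRF OPTIONS response."""
    spec = []
    for name, meta in fields.items():
        if meta.get("read_only"):
            continue  # read-only fields get no edit widget
        spec.append({
            "name": name,
            "label": meta.get("label", name),
            "widget": WIDGET_FOR_TYPE.get(meta.get("type"), "QLineEdit"),
            "required": meta.get("required", False),
        })
    return spec

# Example metadata, shaped like a DRF "actions" -> "POST" block:
fields = {
    "id": {"type": "integer", "read_only": True},
    "title": {"type": "string", "required": True, "label": "Title"},
    "published": {"type": "boolean", "required": False},
}
spec = build_form_spec(fields)
```

A second pass would instantiate the named widgets, lay them out in a QFormLayout, and attach validators; the point here is only that the schema carries enough information to drive that.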
I've googled and couldn't find anything like that. Can someone point me at something similar? Is there a deeper issue with creating such a tool I'm missing?
Thanks!
Related
I am currently working on a medium-sized project that will need to utilize many form-like dialogs. I am developing this application using Qt5 widgets. (I am trying to implement a debugging tool for a class-based network protocol). Most of the logic behind the forms is very simple.
The view for a form would look like this:
Basically, when send is pressed it just constructs a packet using the data in the form, and would insert it into the message buffer to be sent out when appropriate later in the program. I want to utilize proper coding idioms when I develop this because I'm using this project to familiarize myself with GUI programming.
What concerns me, is that I don't know how to idiomatically structure my code in a way that is extensible, testable, and robust. I don't want my dialog to be directly responsible for inserting the data into the send stream, nor should it have to handle any business logic associated with it.
Naively, I would imagine that the view should do very little logic other than communicate to some other part of the process that the user has edited something or pressed a button; perhaps it could also validate that the text is in a proper format. The other part of the process would be what I imagine to be the 'model', thereby (I believe) following an MV architecture. This leads to several questions:
Most tutorials like this seem to want the user to implement a QAbstractListModel, a QAbstractTableModel, or perhaps even a QAbstractItemModel, but none of these seem needed or relevant to the type of data I am working with. Furthermore, their interfaces seem very heavy-handed for what I think is a simple data flow. Do I need to subclass one of these in order to properly fulfill the MV architecture, or could I just manage the connections myself? If I manage the connections myself, should I create a presenter class to handle this and thereby implement an MVP architecture?
How should data be passed from this form to the rest of the application? I would prefer to avoid any/all global/static designs if plausible/correct. On send, a packet should be constructed and inserted into the send buffer, but should that be done in the model for this dialog? Should a reference to the buffer or its controlling interface be provided to and manipulated by this model? Should the relevant data be passed or returned to some outside model that would handle the buffer manipulation?
The data in these forms is basically one-to-one with the information needed to construct the messages for the send buffer, to the point that you could reasonably use or adapt the existing interfaces as a functional model. However, I feel that this would be a code smell -- is that correct? Should I create a new class that basically mirrors my message class in order to have better separation of concerns?
Thank you all for any insight or resources that can be provided. Much of this is me overthinking the problem, but I would like to be sure that my design philosophy is sound before I implement 60+ dialogs so that this application can fully cover the protocol's standard.
I don't want my dialog to be directly responsible for inserting the data into the send stream
Exactly. It should only be responsible for passing the data on to some service that is responsible for sending the message, i.e., separation of concerns and the single-responsibility principle.
Most tutorials like this seem to want the user to implement a QAbstractListModel or a QAbstractTableModel, or perhaps even a QAbstractItemModel, but none of these seem needed or relevant to the type of data I am working with,
Is your data going to be represented as a table, list, or tree? If yes, then you can use or subclass one of these. Alternatively, you can use QListWidget, QTreeWidget, etc., which don't use the model/view design. If no, then these are obviously not for you. It depends on the data and how you want to present it, and only you know the data, so you have to make that decision.
How should data be passed from this form to the rest of the application?
Using the signal/slot mechanism. Take the form in the picture, for example. The Send button shouldn't send anything. It should just accept the data entered into the form and then pass that data via a signal to some other service, for example a MessageSender or a ValidationService. That is all the form should do.
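The shape of that decoupling can be sketched without any framework. In real PyQt5 code the form would declare a pyqtSignal and the service's method would be connected as a slot; the tiny Signal stand-in below (and the MessageSender name) are just illustrative, so the sketch stays self-contained.

```python
# Framework-free sketch of the signal/slot decoupling described above.
class Signal:
    """Minimal stand-in for pyqtSignal: connect slots, emit to all of them."""
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

class MessageForm:
    """The 'view': gathers input and emits it. It never touches the buffer."""
    def __init__(self):
        self.submitted = Signal()
        self._fields = {}
    def set_field(self, name, value):
        self._fields[name] = value
    def on_send_clicked(self):
        # The Send handler only forwards the data; no business logic here.
        self.submitted.emit(dict(self._fields))

class MessageSender:
    """The service that owns the send buffer."""
    def __init__(self):
        self.buffer = []
    def enqueue(self, data):
        self.buffer.append(data)

form = MessageForm()
sender = MessageSender()
form.submitted.connect(sender.enqueue)  # the only coupling point

form.set_field("dest", "0x2A")
form.set_field("payload", "ping")
form.on_send_clicked()
```

Note that the form knows nothing about MessageSender; swapping in a ValidationService, or a test double, is a one-line change at the connection point.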
I would prefer to avoid any/all global/static designs if plausible/correct.
That is good, and you should avoid them unless there is no other way. For example, a logging service is needed everywhere in the program throughout its lifetime; I would make that a singleton.
On send a packet should be constructed and inserted into the send buffer, but should that be done in the model for this dialog? Should a reference to the buffer or its controlling interface be provided and manipulated by this model?
Use signals/slots. A dialog should just take input; it shouldn't be sending data around or doing other things. It should take input and pass it on. Design your classes/objects with this in mind.
In a recent project I'd like to try out an Aurelia frontend with a Django backend.
I did some projects with Django and want to use Django REST API for my backend.
I'm new to Aurelia and read the documentation several times.
Now I'm wondering if it would be good practice to explicitly define models (e.g. User with nickname, email, mobile, address, etc.) in the Aurelia frontend, because in Django I have already defined my models in models.py for the database. Since I fetch the data from my Django application via the API, I could maybe omit them.
In the Aurelia "getting started" section of the documentation they define the ToDo model in a separate file, but the data isn't attached to a database. Doing this seems to me like doing the work twice (in back- and frontend), which violates the DRY principle.
What would you think is good practice? Thanks for your recommendations!
Defining classes on the client side has its advantages. First, you can map the response data into a class instance and work with the data that way. That said, working with a plain JSON object isn't hard.
Second, serializing a class into JSON is easy. Plus, some backend frameworks expect a very specifically formatted JSON object; sometimes a class is the only practical way of doing that.
Third, one thing you can do with a class that you cannot do with a JSON object (as far as I know) is add methods/functions. That extensibility alone can be worth the effort.
It certainly isn’t unusual to have classes defined on both the back end and the front end. I have worked with Aurelia and Angular, and both work nicely with them. I have also done an Aurelia app without client-side classes; what I really missed was IntelliSense in the IDE (a fourth advantage), since nothing was exported/imported. BTW, I use VS Code.
DRY is nice. But showing intent can go a long way, especially if someone else picks up the code when you are done with it. Classes help there -- a fifth advantage.
Finally, I am sure there are many more advantages.
Conclusion: I would recommend using client side classes. You will not regret it.
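The first three advantages fit in one small class. The same idea applies in an Aurelia/TypeScript app; Python is used here only to keep this page's examples in one language, and the User fields are the hypothetical ones from the question.

```python
import json

class User:
    """Client-side model mirroring the backend's User resource (hypothetical fields)."""
    FIELDS = ("nickname", "email", "mobile")

    def __init__(self, nickname="", email="", mobile=""):
        self.nickname = nickname
        self.email = email
        self.mobile = mobile

    @classmethod
    def from_json(cls, payload):
        # Advantage 1: map the response data into a class instance.
        data = json.loads(payload)
        return cls(**{k: data.get(k, "") for k in cls.FIELDS})

    def to_json(self):
        # Advantage 2: serialize back into exactly the shape the backend expects.
        return json.dumps({k: getattr(self, k) for k in self.FIELDS})

    def display_name(self):
        # Advantage 3: behaviour you can't hang off a bare JSON object.
        return self.nickname or self.email

user = User.from_json('{"nickname": "ada", "email": "ada@example.org"}')
```

With a class like this, the IDE can also autocomplete `user.nickname` (the fourth advantage), and the field list documents the resource's shape in the frontend codebase (the fifth).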
Hope this helps!
I am starting to evaluate frameworks for an HTML5 app. I really like the Enyo model for developing an app. However, my app needs an object-relational mapper (ORM) for local storage and some way to update the UI based on changes in the ORM data.
It looks like Ember has some great linkages for the ORM and update parts.
Has anyone used these two together? Does it make sense, or does either of them already address the entire problem space by itself?
Thanks in advance,
Charlie
I've not tried to integrate them directly yet, but I think the Enyo event model would work well here. Have the ORM live as a top-level component in your application and have it broadcast data-change messages into your tree of components using enyo.waterfall() or enyo.waterfallDown().
I do something similar in a cryptogram game I'm working on where I use that mechanism to broadcast information about the player's guesses into the view tree where individual cells use them to modify their display.
At my company we develop prefabricated web applications. While our applications work as-is in many cases, we often receive complex customization requests, and we are having trouble handling them in a structured way: generic functionality should not be influenced by customizations. At the moment we are looking into Spring Web Flow (SWF), and it looks like it can handle part of what we need.
For example, we have an online shopping application, and a client requests that at the moment of checking out the shopping basket, the order be written to a proprietary logging system.
With SWF, it is possible to inherit our Generic Checkout Flow into a ClientX Checkout Flow and extend it with the states necessary to perform the custom log write. This scenario seems to be handled well: we can keep our Generic Checkout Flow as-is and extend it with custom functionality, in accordance with the Open/Closed principle. Over time our team can add functionality to the Generic Checkout Flow, and it can be distributed to a client without modifying the extension.
However, sometimes clients request that our pages be customized. For example, in our online shopping app a client requests a multiple-currencies feature. In this case, you need to modify the View as well as the Flow (Controller). Is there a technology that would let me extend the Generic View without modifying it? So far, with the majority of template-based view technologies (JSP, Struts, Velocity, etc.), the only two solutions seem to be
to have a specific version of the view for each client, which obviously leads to an implementation explosion
to make the application configurable depending on parameters (if multipleCurrency then ...), which leads to a code explosion: a number of configuration conditions that have to be checked on each page
What would be the best solution in this case? There are probably other customization cases I cannot recall right now. Is there maybe a component-based view technology that would let me extend a certain base view, and would that make sense?
What are typical solutions to a problem of configurable web applications?
Each customization point implies some level of conditionality.
Where possible, folks tend to use style sheets to control some aspects. For example, the display of a currency selector could perhaps be done that way.
Another thought on that currency example: one is the limiting case of many. So the model provides the list of currencies; the view displays a selector if there are many, and a fixed field if there is only one. That is quite well-defined behaviour: easy to test and reusable for other scenarios.
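The one-vs-many idea above fits in a few lines. This is a hedged sketch in Python (the widget names and the function are hypothetical; in the actual stack this would live in the view template or a view helper):

```python
# "One is the limiting case of many": the model always supplies a list of
# currencies, and the view picks its own presentation from the list length.
# No customer-specific conditional appears anywhere in the page.
def currency_widget(currencies):
    if len(currencies) > 1:
        return {"widget": "selector", "options": list(currencies)}
    # A single currency degenerates to a fixed, non-editable field.
    return {"widget": "fixed", "value": currencies[0]}

single = currency_widget(["EUR"])
multi = currency_widget(["EUR", "USD", "GBP"])
```

The per-client customization then shrinks to configuring the model's currency list, which is data, not a fork of the view.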
In this project there are two data sources: one is the project's own database, the other is a (semi-)legacy web service. The problem is that the admin part has to keep them in sync and manage both, so that the user doesn't have to know they're separate (or does know, but doesn't care).
Here's an example: there's a list of languages. Both apps - the project and the legacy one - need to use them. However, each adds its own meaning: the project may need an active/inactive flag, and the legacy system needs a language code.
But the admin part has to manage everything: language name, active/inactive, language code. When loading, data from both systems has to be merged and presented; when saving, data has to be updated in both systems.
So, what's the best way to represent this separated data (to be used in the admin page)? Note that I use ASP.NET MVC / NHibernate.
How do I manage legacy data?
Do I connect the admin part to the legacy web service's external interface - which currently only has GetXXX() methods - and add the missing C[R]UD methods?
Or do I connect directly to the legacy database - which is possible, since I do control it?
Where do I do split/merge of data - in the controller/service layer, or in the repository/data layer?
In the controller layer I'd do "var viewmodel = new ViewModel { MyData = ..., LegacyData = ... };". The problem: the code gets cluttered with legacy concerns.
In the data layer, I'd do "var model = repository.Get(id)" and the model would contain data from both worlds; when I do "repository.Save(entity)" it would update both data sources, with only the project-specific fields stored in the local DB. The problems: a) a possibly leaky abstraction; b) always fetching data from the web service when it is only needed sometimes, and usually only for the admin part.
As a modification, use ICombinedRepository&lt;Language&gt;, which would provide the additional split/merge. Problem: I still need either a new model or IWithLegacy&lt;Language, LegacyLanguage&gt;...
Or have a single "sync" method; this would remove legacy items not present in the project's item list, update those that are present, create legacy items that are missing, etc.
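The "sync" option amounts to a set diff on a shared key. A minimal sketch, with hypothetical names (the actual project is C#/NHibernate; Python is used here only to keep this page's examples in one language):

```python
# Diff the project's items against the legacy items by a shared key and
# return what to create, update and delete on the legacy side.
def plan_sync(project_items, legacy_items, key="name"):
    project = {item[key]: item for item in project_items}
    legacy = {item[key]: item for item in legacy_items}
    return {
        # in the project but missing from legacy -> create there
        "create": [project[k] for k in project.keys() - legacy.keys()],
        # present in both -> push the project's current values
        "update": [project[k] for k in project.keys() & legacy.keys()],
        # in legacy but gone from the project -> remove there
        "delete": [legacy[k] for k in legacy.keys() - project.keys()],
    }

plan = plan_sync(
    [{"name": "English", "active": True}, {"name": "French", "active": False}],
    [{"name": "English", "code": "en"}, {"name": "German", "code": "de"}],
)
```

Keeping the diff pure like this (it only computes the plan, without touching either store) makes it easy to test, and the actual writes can then be batched per data source.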
Well, to summarize the main issues:
Do I develop a CRUD interface on the web service, or connect directly to its database (which is under my complete control, so I may even later decide to move the web service part into the main app or make it use the main DB)?
Do I have separate classes for the project's and the legacy entities, managed separately, or do the project's entities carry all the legacy fields, managed transparently on save/load?
Anyway, are there any useful tips on managing mostly duplicated data from different sources? What are the best practices?
In the non-admin part, I'd like to completely hide the notion of legacy data, which is what I do now behind the repository interfaces. But for the admin part it's not that clear or easy...
What you are describing here seems to warrant the need for an Anti-Corruption Layer. You can find solutions related to this topic here: DDD, Anti Corruption layer, how-to?
When you have two conceptual Bounded Contexts, but you're only using DDD for one of them, the Anti-Corruption layer comes into play. When reading from your data source (performing a get operation [R]), the anti-corruption layer will translate your legacy data into usable objects for your project. When writing to your data source (performing a set operation [CUD]), the anti-corruption layer will translate your DDD objects into objects understood by your legacy code.
Whether or not to use the existing Web Service depends on whether or not you're willing to change existing code. Sticking with DRY practices, you don't want to duplicate what you already have. If you want to keep the Web Service, you can add CUD methods inside the anti-corruption layer without impacting your legacy application.
In the anti-corruption layer, you will want to make use of adapters and facades to bring together separate classes for your DDD project and the legacy application.
The anti-corruption layer is exactly where you handle splitting and merging.
Let me know if you have any questions on this, as it can be a somewhat advanced topic. I'll try to answer as best I can.
Good luck!