Clojure: how to architect desktop UI - clojure

I'm trying to design a desktop UI for schematics, layout, drawing stuff. Just looking for high level advice from actual software designers.
Assuming an in-memory "database", (clojure map of arbitrary depth for all user data, and possibly another one for application preferences, etc.), I'm examining how to do the model-view-controller thing on these, where the data may be rendered and modified by any one or more of:
A standalone text field that shows a single parameter, such as box width.
An "inspector" type of view that shows multiple parameters of a selected object, such as box width, height, color, checkboxes, etc.
A table/spreadsheet type of view that shows multiple parameters of multiple objects, potentially the whole database
A graphical rendering of the whole thing, such as both schematic and layout view.
Modifying any one of these should show up immediately in every other active view, both text and graphical, not after clicking "ok"... so no modal boxes allowed. If for some reason the table view, an inspector view, and a graphical rendering are all in view, dragging the corner of the box graphically should immediately show up in the text, etc.
The platform in question is JavaFX, but I'd like a clean separation between UI and everything else, so I want to avoid binding in the JFX sense, as that ties my design data very tightly to JFX Properties, increases the graininess of the model, and forces me to work outside the standard clojure functions for dealing with data, and/or deal heavily with the whole getValue/setValue world.
I'm still assuming at least some statefulness/mutability, and the use of built-in Clojure functionality such as the ability to add-watch on an atom/var/ref and let the runtime signal dependent functions.
Platform-specific interaction will rest tightly with the actual UI, such as reifying ActionListeners, and dealing with ObservableValues etc., and will attempt to minimize the reliance on things like JavaFX Property for actual application data. I'm not entertaining FRP for this.
I don't mind extending JFX interfaces or making up my own protocols to use application-specific defrecords, but I'd prefer for the application data to remain as straight Clojure data, unsullied by the platform.
The question is how to set this all up, with closest adherence to the immutable model. I see a few options:
Fine-grain: Each parameter value/primitive (i.e. Long, Double, Boolean, or String) is an atom, and each view which can modify the value "reaches in" as far as it needs to in the database to change the value. This could suck, as there could potentially be thousands of individual values (for example points on a hand-drawn curve), and it will require lots of (deref...) junk. I believe this is how JFX would want to do this, with giant arrays of Properties at the leaf nodes, etc., which feels bloated. With this approach it doesn't seem much better than just coding it up in Java/C++.
Medium-grain: Each object/record in the database is an atom of a Clojure map. The entire map is replaced when any one of its values changes. Fewer total atoms to deal with, and allows for example long arrays of straight-up numbers for various things. But this gets complicated when some objects in the database require more nesting than others.
Coarse-grain: There is just one atom: the database. Any time anything changes, the entire database is replaced, and every view needs to re-render its particular portion. This feels a bit like using a hammer to swat a fly, and a naive implementation would require everything to re-render all the time. But I still think this is the best trade off, as any primitive has a clear access path from the root node, whether it is accessed on a per-primitive level or per-record level.
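To make the coarse-grain option concrete, here is a minimal sketch of the pattern in Python, used here only because it keeps the example short; in Clojure this would be a single atom holding the whole database, add-watch for the views, and get-in/assoc-in for access. All names and the path convention below are invented for illustration:

```python
# One root store holds the entire document; every change swaps the root for a new
# value, and each registered watcher re-renders only if its slice actually changed.
class Store:
    def __init__(self, data):
        self._data = data            # the whole "database" as one nested dict
        self._watchers = []          # (path from the root, callback) pairs

    def watch(self, path, callback):
        """A view registers interest in one access path from the root."""
        self._watchers.append((tuple(path), callback))

    def update(self, path, value):
        """Swap in a new root with `value` at `path`, then notify affected watchers."""
        old, self._data = self._data, _assoc_in(self._data, list(path), value)
        for watched, callback in self._watchers:
            if _get_in(old, watched) != _get_in(self._data, watched):
                callback(_get_in(self._data, watched))

def _get_in(node, path):
    for key in path:
        node = node[key]
    return node

def _assoc_in(node, path, value):
    """Return a copy of `node` with `value` substituted at `path`; nothing is mutated."""
    if not path:
        return value
    copy = dict(node)
    copy[path[0]] = _assoc_in(node[path[0]], path[1:], value)
    return copy

if __name__ == "__main__":
    store = Store({"boxes": {"b1": {"width": 10, "height": 5}}})
    store.watch(["boxes", "b1", "width"], lambda w: print("inspector sees width =", w))
    store.watch(["boxes", "b1"], lambda box: print("table row is now", box))
    store.update(["boxes", "b1", "width"], 42)
```

The point of the comparison in update is that a dragged box corner only forces the text field, inspector row, and table row that actually depend on that path to re-render, even though the root was swapped wholesale.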
I also need the ability for one data template to be instantiated many times. So for example if the user changes a symbol or shape which is used in multiple places, a single edit will apply everywhere. I believe this also requires some type of "pointer"-like behavior. I think I can store an atom to the model, then instantiate as needed, and it can work in any of the above grain models.
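And a tiny sketch of the template/instance point in the same vein (all names invented): instances carry only the template's id plus per-instance data, and the "pointer" is resolved at render time, so editing the template once is reflected everywhere it is used.

```python
database = {
    "templates": {"nand-gate": {"pins": 3, "outline": [(0, 0), (4, 0), (4, 2), (0, 2)]}},
    "instances": [
        {"template": "nand-gate", "position": (10, 20)},
        {"template": "nand-gate", "position": (50, 20)},
    ],
}

def render_instance(db, inst):
    # Resolve the "pointer" at render time instead of copying the template data.
    template = db["templates"][inst["template"]]
    return {"at": inst["position"], **template}

for inst in database["instances"]:
    print(render_instance(database, inst))
```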
Any other approaches? Is trying to do a GUI editor-like tool in a functional language just stupid?
Thanks

I don't think it's stupid to use a functional language to build a GUI editor-like tool. But I can't claim to have an answer to your question. Here are some links that might help you in your journey:
Stuart Sierra - Components Just Enough Structure
Chris Granger - Light Table: Explains how Light Table (source) is structured.
Chris Granger - The IDE as a Value: blog post related to the video above
Conal Elliott - Tangible Functional Programming: Using Functional Reactive Programming to create a composable UI, but his code is in Haskell.
Nathan Herzing & Chris Shea - Helping voters with Pedestal, Datomic, Om and core.async
David Nolen - Comparative Literate Programming: Shows how to use core.async to simplify UI programming in ClojureScript. The ideas here can be used in a desktop UI.
Rich Hickey - The Language of the System: Amazing talk about system programming by the creator of Clojure.
Erik Meijer has a good quote about functional vs imperative code:
...no matter whether it's Haskell, C#, Java, F#, Scala, Python, PHP: think about the idea of having a sea of imperative code that interacts with the outside world, and in there have islands of pure code where you write your functions in a pure way.
But you have to decide how big the islands are and how big the sea is. But the answer is never that there are only islands or only sea. A good programmer knows exactly the right balance.

Related

QTableWidget vs QTableView

I am new to this Model/View framework of Qt. In my application I want to have 1000 x 1000 cells. Memory requirements should be minimal and it should be fast. I don't know what this Model terminology is for, but I have my own class which knows how to deal with the double variables stored in the table. Currently I am using QLineEdits with a validator to create the array of cells, but it was way too slow for anything over 50 x 50 cells. So I decided to go the good old MS Excel way.
So which Widget should I use: QTableWidget or QTableView?
And can anybody please explain in short what this Model/View framework is? I am not a computer science guy, hence I am finding it tough to understand...
cmannett85's recommendation is a good one. Read the docs about a dozen times.
Then, if performance and memory issues are your primary concern and you think you can out-perform the QTableWidget implementation, a QTableView on top of a QAbstractTableModel or QStandardItemModel is what you're looking for.
Since you're new to Qt's model-view architecture, I'd recommend using the QStandardItemModel until you feel like you're getting the hang of it. If your performance still isn't good enough, avoid a lot of the memory duplication and wasted objects by implementing your own custom model. Plus, get yourself a good textbook and read its chapter on the model-view framework about 12 times. That section alone was worth its weight in gold, imho.
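For scale, here is a sketch of the QStandardItemModel route; PyQt5 is used purely to keep the snippet short, and the equivalent C++ calls are the same:

```python
import sys
from PyQt5.QtGui import QStandardItemModel, QStandardItem
from PyQt5.QtWidgets import QApplication, QTableView

app = QApplication(sys.argv)
model = QStandardItemModel(1000, 1000)       # rows, columns
model.setItem(0, 0, QStandardItem("3.14"))   # cells are lightweight items, not widgets
view = QTableView()
view.setModel(model)                         # one view over one model: no million QLineEdits
view.show()
sys.exit(app.exec_())
```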
Here are the basics for Qt's custom model-view framework:
Your actual data is stored in a list/tree somewhere
The model provides a standard framework for queries to and edits for your data
Proxy models allow you to sort/filter your data without affecting the original model
The view provides a means to visually observe and interact with your data
Delegates (often optional) tweak the appearance of your data and provide custom editors to the data
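To make that list concrete, here is a minimal custom model behind a QTableView, again sketched in PyQt5 for brevity; the class name and storage layout are invented, and the C++ version subclasses the same QAbstractTableModel interface:

```python
import sys
from PyQt5.QtCore import Qt, QAbstractTableModel, QModelIndex
from PyQt5.QtWidgets import QApplication, QTableView

class DoubleTableModel(QAbstractTableModel):
    """The actual data lives in a plain list of lists; the model only answers queries about it."""
    def __init__(self, rows, cols, parent=None):
        super().__init__(parent)
        self._values = [[0.0] * cols for _ in range(rows)]

    def rowCount(self, parent=QModelIndex()):
        return len(self._values)

    def columnCount(self, parent=QModelIndex()):
        return len(self._values[0]) if self._values else 0

    def data(self, index, role=Qt.DisplayRole):
        if role in (Qt.DisplayRole, Qt.EditRole):
            return str(self._values[index.row()][index.column()])
        return None

    def setData(self, index, value, role=Qt.EditRole):
        if role == Qt.EditRole:
            try:
                self._values[index.row()][index.column()] = float(value)
            except ValueError:
                return False              # reject non-numeric input
            self.dataChanged.emit(index, index)
            return True
        return False

    def flags(self, index):
        return Qt.ItemIsSelectable | Qt.ItemIsEnabled | Qt.ItemIsEditable

if __name__ == "__main__":
    app = QApplication(sys.argv)
    view = QTableView()
    view.setModel(DoubleTableModel(1000, 1000))   # 1000 x 1000 cells, no per-cell widgets
    view.show()
    sys.exit(app.exec_())
```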
If you're feeling both cheap and brave, check out this excerpt on implementing your own custom model. Work at it one function at a time and play with it as you go.
To understand the framework, start off with the documentation about it. It starts slow, but becomes moderately extensive and covers most of the classes involved.
QTableWidget or QTableView?
Once you have read the documentation you will see why this question doesn't really make sense: a QTableWidget uses a QTableView to display the data. QTableWidget (along with QTreeWidget, etc.) uses the MVC framework, but it encapsulates it all into a handy package useful for most purposes. If you need to do something different, you will have to crack it into its component parts and reimplement the bits you need.

Is there some way to express OpenGL instructions in a language-independent way?

The title pretty much says it. I was thinking about making a simple video editor, and I was unsure about the "logistics" of various effects and filters and such. Let's say I want to make it possible for an external program to apply some effect to the image. Does it have to be an executed program necessarily, or can it simply be a set of OpenGL instructions that the video editor can parse and essentially "pass along" to OpenGL? Technically, that would be a program, but it seems more elegant and standardized than making a full-fledged secondary program just to apply an effect. Maybe there's a better way?
Edit: Thanks for the answers guys. Here's a follow up though: How do other video editors implement this? The reason I ask is because the answers seem to be rather negative on the above point, so I was wondering how it is done by professional applications.
Does it have to be an executed program necessarily, or can it simply be a set of OpenGL instructions that the video editor can parse, and essentially "pass along" to OpenGL.
There are several ways to approach the problem ("make video editor with ability to create custom effects").
1. Make your editor support plugins, and provide an API for making new video effect plugins. Macromedia Director worked this way. A new effect has to be implemented as a plugin library (.dll/.so, depending on your platform).
2. Embed a scripting language with OpenGL bindings into your application for making video effects, and provide basic functions for interacting with the internal state of your application. This will allow faster development of effects, but performance will suffer greatly for certain operations. Blender (the 3D editor) works this way, and Maya provides a similar framework (I think). Basically, this is #1 but implemented as a scripting language.
3. Make your editor load GLSL shaders. This will allow you to make some effects fairly quickly, but other than that it is not a good idea: while GLSL has a decent amount of "power" (functions/maths), GLSL alone won't allow you to make anything you want, because it won't be able to interact with your application at all (pure GLSL can't create textures, framebuffers, and so on, which will limit your ability to make something more interesting). Most likely you'll only be able to make some kind of filter, nothing more. #1 and #2 will give the "power" to the end user.
4. Implement a node-based effect editor. I.e. there are several types of nodes the user can drag around that represent certain operations; they have inputs/outputs and the user can connect them. Blender 3D and UDK (Unreal Development Kit) have such a feature. I think .kkrieger (site is down, google it) used a similar technique to make a 96kb first-person shooter. This can be quite powerful, but programmers are very likely to hate dragging graphical entities around with the mouse, unless you also provide a way to build such a node graph via a scripting language (AviSynth had something similar, but not quite like that). This can be as powerful as #1 or #2, but will cost you extra development time.
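As a rough illustration of option 4, a toy node graph can be just data plus a small evaluator; the node names, the wiring format, and the one-dimensional "image" below are all invented:

```python
def brightness(image, amount):
    return [pixel + amount for pixel in image]

def blend(a, b, factor):
    return [x * (1 - factor) + y * factor for x, y in zip(a, b)]

# The graph as data: node id -> (node function, {input name: constant or another node id}).
graph = {
    "bright": (brightness, {"image": "source", "amount": 10}),
    "mix":    (blend, {"a": "source", "b": "bright", "factor": 0.5}),
}

def evaluate(node_id, graph, inputs, cache=None):
    cache = {} if cache is None else cache
    if node_id in inputs:                    # an external input such as the source frame
        return inputs[node_id]
    if node_id not in cache:
        func, args = graph[node_id]
        resolved = {name: evaluate(value, graph, inputs, cache) if isinstance(value, str) else value
                    for name, value in args.items()}
        cache[node_id] = func(**resolved)
    return cache[node_id]

print(evaluate("mix", graph, {"source": [0, 100, 200]}))   # [5.0, 105.0, 205.0]
```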
Aside from that, there is no "language-independent" way to make an effect. Either you have to use an existing language to make the effect plugin interact with your application, or you have to make your own "language" to describe the effect. The only language-independent way to deal with this is to hire a programmer and tell him what kind of effect you want. But then again, that requires natural language.
Let's say I want to make it possible for an external program to apply some effect to the image. Does it have to be an executed program necessarily, or can it simply be a set of OpenGL instructions that the video editor can parse, and essentially "pass along" to OpenGL.
Either thing works. However, in the OpenGL standard itself there's no such thing as "OpenGL instructions" or "opcodes". But at least in X11-based systems with indirect GLX this is possible, because GLX actually defines opcodes, and it is possible for multiple X clients to operate on the same context if it is indirect. Unfortunately you won't have that option most of the time, as you probably want a direct context, either because you want OpenGL 3 (for which not all operations have opcodes defined, which makes indirect contexts impossible for OpenGL 3), or because you're not using GLX.
So the next option is that you provide the other process with some kind of command/interpreter prompt for OpenGL. If you're lazy, I suggest you just embed a Python interpreter in your program, together with the Python OpenGL bindings. Those operate on whatever context is currently active, allowing the other program to send some Python script to do its stuff.
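A hedged sketch of that idea, with the host side reduced to pure Python so it stays self-contained: the editor compiles a user-supplied effect script into a controlled namespace and calls into it per frame. The `apply_effect` convention and the frame format are made up; a real editor would expose its OpenGL bindings (e.g. PyOpenGL) in that namespace instead.

```python
USER_EFFECT = """
def apply_effect(frame):
    # user-written code: a simple invert filter
    return [[(255 - r, 255 - g, 255 - b) for (r, g, b) in row] for row in frame]
"""

def load_effect(source):
    namespace = {}
    exec(compile(source, "<user effect>", "exec"), namespace)
    return namespace["apply_effect"]

effect = load_effect(USER_EFFECT)
frame = [[(10, 20, 30), (0, 0, 0)]]
print(effect(frame))   # [[(245, 235, 225), (255, 255, 255)]]
```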
And last but not least you could provide OpenGL through some RPC interface.
Or you provide some plugin system, where you load some DLL, which does the deed.
Sure, but it would be basically the equivalent of you creating your own language. You can accept OpenGL "instructions" from the user via some text interface (or if you want to somehow put something together as a GUI), then parse those "instructions", and the underlying implementation would execute those instructions in whatever language your application is written in.
The short answer is no. What you are looking for is an HCI guy's dream DSL which would describe presentation regardless of the underlying technology.
Probably not quite what you wanted, but you might want to look at GLSL. GLSL is a C-like language that is compiled by the graphics card driver into native "graphics assembly language".

Should I be using scripting in this circumstance?

I am writing a game. It is in C++ and DirectX, and I think I am making my own engine, as I am programming from the WinProc up.
I have made large amounts of progress, such as sound, collision detection, AI specific to my game setting, and dynamic graphics creation from tile sheets.
After doing some research on event programming I have been told to "script it". I have researched this and, as far as I can see, scripting is used so non-programmers can add to the project.
I am a pretty good programmer and I have no intention of having anyone else work on the game, except on outside things such as graphics or map design. I already have a function where I can read in maps from CSV files. These contain not just the tile layout but also the NPC data and entrances/exits. Other files of the same nature control monster and item data, so I can update the contents without a recompile.
So I am asking for the views of experienced programmers, and maybe real-world examples that relate to my circumstances, as to why I should or should not use scripting in my game.
The NPC and monster data need to be able to contain some simple logic like "if the player has this item", "if the player has been to somewhere", "if the player is coming from the right", etc. For this you'll need the ability to define some composed actions and conditionals. That is called scripting, and it is best done by incorporating an existing script language interpreter like Lua, Python, some variant of JavaScript, whatever.
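A small sketch of what that buys you over plain CSV data; the field names and the `eval`-based shortcut are purely illustrative, and a real engine would embed Lua or Python properly and sandbox the scripts:

```python
npc_dialogue = [
    {"condition": "player.has('rusty key') and player.visited('crypt')",
     "line": "So you found the crypt... give me the key."},
    {"condition": "True",
     "line": "Nice weather we're having."},
]

class Player:
    def __init__(self, items, places):
        self._items, self._places = set(items), set(places)
    def has(self, item):
        return item in self._items
    def visited(self, place):
        return place in self._places

def speak(npc, player):
    for entry in npc:
        if eval(entry["condition"], {"player": player}):   # first matching rule wins
            return entry["line"]

print(speak(npc_dialogue, Player(["rusty key"], ["crypt"])))   # the crypt line
print(speak(npc_dialogue, Player([], [])))                     # the fallback line
```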
There are several reasons to use a scripting system in a game, not just as you pointed out, to allow non-programmers to add game content. The first game I worked on professionally used a scripting system and the only people who were writing scripts were the programmers who created the scripting system. There were two reasons they used it:
build times
New data files/script files can be tested quickly. This particular engine didn't support it, but some allow script files to be reloaded while the game is running. The more times you can iterate through the build-deploy-test cycle, the more you will be able to tweak your game and end up with a much better game.
power of expression
A customized scripting system can allow you to easily express/code concepts that are cumbersome in traditional programming. For example, the game engine I am working with now allows me to easily control concurrent animations in script.
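As a rough illustration of that last point, here is what script-level control of concurrent animations can look like when the scripting layer supports coroutines; everything below (the entity format, clip names, the tick loop) is invented:

```python
def move_to(entity, target):
    while entity["pos"] != target:
        entity["pos"] = (entity["pos"][0] + 1, entity["pos"][1])
        yield                                   # wait one frame

def play(entity, clip, frames):
    for _ in range(frames):
        entity["clip"] = clip
        yield

def pickup_animation(entity):
    # Reads like a straight sequence of steps, but runs interleaved with everything else.
    yield from move_to(entity, target=(3, 0))
    yield from play(entity, "bend_down", frames=10)
    yield from play(entity, "stand_up", frames=10)

hero = {"pos": (0, 0), "clip": "idle"}
active = [pickup_animation(hero)]

while active:                                   # the engine's per-frame tick
    for anim in active[:]:
        try:
            next(anim)
        except StopIteration:
            active.remove(anim)

print(hero)   # the entity's final state: {'pos': (3, 0), 'clip': 'stand_up'}
```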
I don't think scripting is entirely necessary in your case. The benefits of adding scripting support to an engine have been listed up and down this page:
It gives you a simpler language to work with
It is more accessible to someone without training
It can be compiled on the fly and hot-swapped at runtime
For lots of high-end games with large teams these items are very important. But it really comes down to the goals for your engine. If you are not considering sharing your engine and plan to work on the game by yourself, then scripting does lose some of its benefits. Adding a scripting language also has some downsides:
It adds complexity and creation time to your engine
It increases debug time; even if the scripting language has a wonderful debugger, switching between two languages is never smooth
As long as you still have a data-driven design (keeping stats and types in separate files which can be swapped out at runtime), you retain the ability to hot-swap data for rapid iteration.
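A minimal sketch of that data-driven point: keep the stats in a plain file and re-read it whenever it changes on disk, so tweaks show up without a recompile or restart. The file name and JSON format are made up for illustration.

```python
import json, os, tempfile

class HotReloader:
    """Re-read a data file whenever its modification time changes."""
    def __init__(self, path):
        self.path, self._mtime, self.data = path, 0.0, {}

    def poll(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:          # the designer saved the file since we last looked
            with open(self.path) as f:
                self.data = json.load(f)
            self._mtime = mtime
        return self.data

if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "monsters.json")
    with open(path, "w") as f:
        json.dump({"slime": {"hp": 10, "damage": 0.8}}, f)
    stats = HotReloader(path)
    print(stats.poll())    # in the real game, this poll would sit in the main loop
```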
Definitely add scripting. It's a pain to recompile the whole game just because your player does 0.8 damage instead of 1.0. Also, if you want a door to open under certain circumstances, you definitely need it. I would do most of the game in a scripting language; it's just more flexible.
Big names are using scripting languages. For example, Eve Online is written in Stackless Python. Other examples can be found here: http://wiki.python.org/moin/PythonGames

How to build a system for an editorial team

I'm developing a web portal that mostly works like a newspaper site. The focus is on articles, containing text, videos and images. These articles have attachments which should be presented in a sidebar. These attachments might be the same objects that are displayed within the body text.
I have been thinking a lot about how to create the structure and - and this is a major point - how to enable the editor to edit all this stuff comfortably.
What I evaluated were Django-CMS and feincms as complete systems, and several third-party modules that do parts of the work.
Now, I have a solution for inline objects: I forked the inline module of django-basic-apps, which is now able to take additional parameters for the objects to embed. These parameters are important, e.g. to embed "an image with object id x, but at most x pixels in size".
What is not solved with my approach is generating a sidebar containing a bunch of inline tokens. I could create a custom widget for this, though. A better solution would surely be to add some functionality for attaching generic objects (videos, images...) to an article object.
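One common way to model that attachment idea in Django is a through-model using the contenttypes framework, so an article can point at images, videos, or anything else. The model names below are invented as an illustration, not a drop-in solution:

```python
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()

class Attachment(models.Model):
    article = models.ForeignKey(Article, related_name="attachments", on_delete=models.CASCADE)
    position = models.PositiveIntegerField(default=0)   # ordering in the sidebar
    # The attached object can be an Image, a Video, or anything else:
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    content_object = GenericForeignKey("content_type", "object_id")

    class Meta:
        ordering = ["position"]
```

The sidebar then becomes article.attachments.all() in the template, and the editors can manage attachments inline on the Article admin page (e.g. with a GenericTabularInline).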
While my solution is working so far, I'm not sure if there are other ways to solve this common scenario, and I would like to hear some other experiences on this topic and how you deal with it.
Since there does not seem to be a wider need for a solution to this generic problem, I will use my solution and see whether it proves itself in practice.
Take a look at Armstrong CMS. It's specifically designed to meet the needs of news organizations. It was developed out of the code that powers The Texas Tribune, a very large Django news site that won the Edward R. Murrow Award for best local news website in 2010.
Armstrong scales very well, is fast and can handle just about any kind of content you want to throw at it.

When I proceed to develop software, UI design or database design: which should come first?

I tried to design the UI with some UI mocking software, but I found it hard to settle all the details of the design, since the database isn't designed yet.
But if I design the database first, the same problem occurs: I don't have the UI, so how can I create a good one?
UI first.
Mock up an elegant and easy-to-use user interface (and workflow) thinking from the point of view of the user, and only then think about the underlying database / data structures you'll need to bring that UI to life.
If you can't design your UI because you haven't yet designed your database, you're doing it wrong IMHO. How many annoying pieces of software have you used that suffered from letting the database design drive the UI design?
Edit: As others have pointed out, you need to start with use cases / user stories. The UI design and database design, whichever order you do them, should only happen after you know what your software is trying to do, and for whom.
Edit by Bryan Oakley: [cartoon, source: gapingvoid.com]
Put the user in the place he deserves. Design the UI first.
The database is only a consequence of user needs.
Use cases first; neither UI nor database.
If you're trying to solve a problem in an object-oriented language, it's recommended that you start thinking about the objects involved. Don't worry about the database or the UI until you've got a solid domain model nailed down that addresses all the use cases.
You don't worry about the database or the UI at first. You can serialize objects to the file system if you need persistence and don't have a database. Being able to drive your app with a command line UI is a good exercise for guaranteeing that you have a good MVC separation.
Start with the objects.
UPDATE:
The one advantage that this approach has is that it doesn't prejudice the UI with a particular database design or vice versa. The objects are agnostic about the other two layers. You aren't required to have a UI or a relational database at all. You're just getting the objects right. Once you have that, you can create any UI or persistence scheme you like, confident that the domain model can handle the problem you've been asked to solve.
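A small, invented illustration of that approach: the domain model knows nothing about a UI or a database, persistence is a plain file for now, and a command-line driver exercises the model end to end.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Account:
    owner: str
    balance: float = 0.0

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

def save(accounts, path):
    with open(path, "w") as f:
        json.dump([asdict(a) for a in accounts], f)

def load(path):
    with open(path) as f:
        return [Account(**record) for record in json.load(f)]

if __name__ == "__main__":
    # Command-line "UI": proves the domain model works with no GUI or database at all.
    alice = Account("alice")
    alice.deposit(25.0)
    save([alice], "accounts.json")
    print(load("accounts.json"))
```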
All the above answers point your issue in the right direction. That said, I would follow the SDLC thoroughly. It helps you understand the need for a solution to the problem at hand. Then comes requirement gathering, followed by the design of either the UI or the underlying structure that supports the UI. It's a procedure, but you would benefit in the end.
Your question is very subjective.
My opinion (and it is just that) is that database and underlying structure should come first. It can often help to put down the keyboard and mouse and compile some notes on paper.
Lay out goals like what you want your application to do, list the features you require and then start thinking about how you'll build it.
This method works for me in application design.
Usually you need to manipulate some data in the solutions you develop, so you should consider how this data is organised first; stabilizing this layer is fundamental at the beginning. I agree with duffymo's comment about designing the business objects first if you are in an OO world. Mapping these objects to the DB will be a part of your work. Then you add business functionality and work on the presentation layer. Of course you will have to do some refactoring from time to time, but usually the refactoring impacts the business and presentation layers more than the database.
Read this, it is a good technique:
DDD - Domain-Driven Design
Would you build a house without a foundation? Database design isn't the fun part but it is the foundation of most business apps and if you get it wrong it becomes the most costly to fix and the most costly to maintain.
That said, I note that there is no reason you can't work on both together as they intertwine. But before you can do either, you need to understand the requirements and the business you are writing the app for.