I'm going to start a new web app in Clojure and plan to use the following libraries as a base. I'd like to hear of others to use instead or in addition:
Om - UI
Secretary - Client side navigation (replace with Bidi or Silk?)
Sente - Client/Server communication
Ring/Compojure/HttpKit - Web server
Component - Application architecture/modularity/reloaded pattern (Something else here?)
Friend - Authentication/Authorization (Buddy instead?)
Liberator - REST web services (Any alternative?)
Figwheel, Weasel, Reloaded Pattern - Development sugar
This answer really depends on what you actually need. There's no such thing as an overall combination of libraries that you should always use as a starting point for a project. You haven't specified your requirements, so we can't say what you should put into your project.
Here are some general observations:
Many Om apps are single-page apps, loaded from a single URL. If that's your case as well, you don't need Secretary. Secretary comes into the picture when you have multiple URLs all served by the same compiled ClojureScript file. And if you are not using :advanced optimizations with ClojureScript, you don't need Secretary either, since you can explicitly load a particular namespace on each page. It depends on the level of complexity and the kind of deployment you want.
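For reference, wiring Secretary up is only a few lines. A minimal sketch, with placeholder route and namespace names:

    (ns app.routes
      (:require [secretary.core :as secretary :refer-macros [defroute]]
                [goog.events :as events]
                [goog.history.EventType :as EventType])
      (:import goog.History))

    (secretary/set-config! :prefix "#")

    ;; A client-side route with one parameter.
    (defroute item-path "/items/:id" [id]
      (js/console.log "showing item" id))

    ;; Hook navigation events into Secretary via Google Closure's History.
    (defonce history
      (doto (History.)
        (events/listen EventType/NAVIGATE
                       #(secretary/dispatch! (.-token %)))
        (.setEnabled true)))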
You mention HTTP Kit: do you have specific needs for it over Jetty, which tends to be much more common? If it's because you want websockets, it's worth asking why you need them. They are cool, but in my opinion they deserve scrutiny before you adopt them as your default. Websockets are certainly not without their flaws, and most cases of server-browser communication are handled more easily, simply and stably with plain request/response and Ajax, which has been around much longer and is worth knowing and using even if you later decide websockets merit exploration. I would recommend the cljs-ajax library for that purpose.
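For illustration, a minimal cljs-ajax sketch (the endpoints are made up):

    (ns app.api
      (:require [ajax.core :refer [GET POST]]))

    ;; Plain request/response in place of a websocket channel.
    (GET "/api/items"
         {:handler       #(js/console.log "got" (pr-str %))
          :error-handler (fn [{:keys [status status-text]}]
                           (js/console.error status status-text))})

    (POST "/api/items"
          {:params  {:name "widget"}
           :format  :edn   ; send the request body as EDN
           :handler #(js/console.log "created" (pr-str %))})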
You should add the fogus EDN library to your list, since it is Ring middleware that will handle data transfer between browser and server regardless of your preferred transport method.
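A sketch of wiring it in, assuming fogus's ring-edn artifact and a made-up endpoint:

    (ns app.server
      (:require [compojure.core :refer [defroutes POST]]
                [ring.middleware.edn :refer [wrap-edn-params]]))

    (defn edn-response [data]
      {:status  200
       :headers {"Content-Type" "application/edn"}
       :body    (pr-str data)})

    ;; wrap-edn-params merges an EDN request body into :params,
    ;; so handlers see plain Clojure data on the way in and out.
    (defroutes routes
      (POST "/api/items" [name]
        (edn-response {:created name})))

    (def app (wrap-edn-params routes))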
Your choice for an authorization library really depends on the type of authorization you need. There isn't a one-size-fits-all in that area.
In REST, URIs should be opaque to the client.
But when you build an interactive JavaScript-based web client for your application, you actually have two clients! One for interaction with the server, and another for the users (the actual GUI). Of course you will want friendly URIs, good enough to answer the question "where am I now?".
It's easier when a server just responds with HTML: people click on links and don't care about their structure. The server provides URIs, the server receives URIs.
It's easier with a desktop client too. Same stuff: just a button "show the resource", and the user doesn't care what the URI is.
It's complicated with browser clients, because there is the address bar. This leads to the low-level part of the web client relying on the URI structure of the server. Which is not RESTful.
It seems like the space between frontend and backend of the application is too tight for REST.
Does it mean that REST is not a good choice for reactive interactive js-based browser clients?
I think you're a little confused...
First of all, your initial assumption is flawed. URI opacity doesn't mean URIs have to be cryptic. It only means that clients should not rely on URI semantics for interaction. Friendly URIs are not only allowed, they are encouraged for the exact same reason you talk about: it's easier for developers to know what's going on.
Roy Fielding made that clear in the REST mailing list years ago, but it seems like that's a myth that won't go away easily:
REST does not require that a URI be opaque. The only place where the word opaque occurs in my dissertation is where I complain about the opaqueness of cookies. In fact, RESTful applications are, at all times, encouraged to use human-meaningful, hierarchical identifiers in order to maximize the serendipitous use of the information beyond what is anticipated by the original application.
Second, you say it's easier when a server just responds with HTML, so people can follow links without caring about structure. Well, that's exactly what REST is supposed to do. REST is merely a more formal and abstract definition of the architectural style of the web itself. Do some research on REST and HATEOAS.
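To make that concrete: a hypermedia response carries its own links, so the client follows what it is given rather than assembling URIs. A tiny sketch of such a body as Clojure data (the shape is invented):

    ;; The client only dereferences the :href values it was handed;
    ;; it never assembles "/items/42/orders" on its own.
    {:id    42
     :name  "widget"
     :links [{:rel :self   :href "/items/42"}
             {:rel :orders :href "/items/42/orders"}]}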
Finally, to answer your question, whether REST is a good choice for you is not determined by client implementation details like that. You can have js-based clients, no problem, but the need to do that isn't reason enough to worry too much about getting REST right. REST is a good choice if you have projects with long term integration, maintainability and evolution goals. If you need something quick, that won't change a lot, or won't be integrated with a lot of different clients and services, don't worry too much about REST.
I have an interesting conundrum. I have been challenged with identifying the most suitable process in which to create a "browser front-end" to an existing multi-user application built within the TigerLogic/Pick D3 environment. My research indicates there are many ways to do this; but I am struggling to decide which method is best or where to start. I have "played" with a few technologies but a commitment to one is needed to get started.
These methods include:
Creation of a complex web service using the MVS Toolkit, and engineering a client from WSDL either from scratch or using maven/wsimport. Tests indicate there is a lot more to this process than originally thought for a simple WSDL.
Development of a Java-based web app that harnesses the MVSP Java API. I am not a Java developer, so this means learning a new language. Development would most likely take place in Eclipse.
Using TigerLogic's FlashCONNECT - resulting in additional expenditure for clients, so not preferable, and more or less ruled out.
There is also the .NET option - but I have ruled this out on the basis of needing portability.
My question is, has anyone else out there done anything like this and could you share your experience? My first task is to build a web-app that will reliably give me the D3 TCL prompt in a browser that I can customise.
I am not sure there is a definitive answer here but would like peoples thoughts and will label the most useful as the answer.
What path you choose depends in some part on your existing skill set and whether that fits in with your portability needs. It is very difficult to give you a concrete answer because we don't know in which part of the chain you need the portability.
It is, however, possible to develop a web-browser front end using .NET which will run on Linux or Windows, so I don't see an issue with portability here. Your web server will have to be Windows-based, but it shouldn't matter whether D3 is running on Linux or Windows at the server end, or whether the client desktops are running Linux or Windows.
You could try TigerLogic's MVSP .NET API, but I do not know if it has the power to deliver based on your needs. I believe mv.NET from BlueFinity could fulfil your needs; in my opinion it is the leading product on the MultiValue market for achieving the goals you have in mind. This will mean spending money, of course, but for it you get a very powerful set of tools, and the cost of investing in a good tool could end up smaller than the cost in time, effort and potential complications of trying to do this piecemeal without spending any extra money. I am sure FlashCONNECT would also do the job. You will have to weigh up the cost of the different options to find out which one is right for you, both technically and financially.
Not knowing whether or not you have .NET in your skill-set, I don't know whether the .NET option would be easier for you. It is however technically possible.
I would suggest using Rocket's D3 (formerly TigerLogic D3) .NET APIs to create a Web API RESTful service that you can consume with JavaScript or any other web technology, and if you need to call it from a D3 subroutine (in case you would), use the MVS Toolkit.
The requirement, though, is D3 9.0 or later.
I've used all of the technologies described, and many more, to interface with D3. I agree with @Glenn and will add: I understand you're edging away from .NET. That's fine, you don't need it. But consider that most LAMP implementations separate the DBMS servers from the web servers. That topology introduces short delays between the tiers but decouples them in case you want to use multiple web servers or multiple databases - a common topology even with D3/MV.
I have a client where we have a Java/Grails front end over Linux, with all data queries filtered through a single, elegant data-provider class that's abstracted from the application logic. That uses a web service call, which I wrote in Java, calling to a .NET web service. The service is easily generated/modified, as is the client from the WSDL. From there IIS carries inbound queries to D3 via mv.NET, and at that point it doesn't matter whether the D3 DBMS is on Linux or Windows. My web service could just as easily have been in Linux with Java, but it would then lack a pooling mechanism - see below.
If you want all Linux then you can go with the MVSP Java library. TigerLogic (since acquired by Rocket Software) committed to a PHP binding for MVSP some months ago. Rather than wait, one of my clients created a PHP wrapper around mv.NET, though MVSP is as easy. So the resulting application is essentially LAMP, but with the M = MultiValue. I have written code like this too - we can write a wrapper in any language which exposes a useful API and abstracts away both the connectivity method and the OS dependencies. In other words, it doesn't matter what languages we want to use or what OSs are involved; that part is rather trivial and subject to change later. It's better to focus on the application than on the communications.
You can also go off the menu, so to speak, and create your own Java/PHP wrapper around the OS-level d3tcl command, which is a script/wrapper around the d3 executable. This allows you to open a connection yourself and pass in commands.
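A rough sketch of that idea in Clojure (this document's lingua franca): the d3tcl invocation details are assumptions, and a real wrapper would keep the process open rather than spawning one per command.

    (ns d3.shell
      (:require [clojure.java.shell :as shell]))

    ;; Hypothetical: pipe one command into d3tcl and capture its output.
    ;; A production wrapper would hold a long-lived process instead of
    ;; paying the login cost on every call.
    (defn run-tcl [cmd]
      (shell/sh "d3tcl" :in (str cmd "\n")))

    (println (:out (run-tcl "WHO")))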
Whatever option you select, you need to consider that opening and closing a DBMS connection is a slow process. You do not want to script a login around every data request. You do want to open a connection and keep that open persistently, while your client code accesses and releases that persistent connection as required. This is why we like mv.NET and FlashCONNECT. With MVSP and other mechanisms you need to create your own persistence model. You'll also need to manage a pool of connection resources - what happens when you get 10 simultaneous queries, or just 1 short one after one long one? You don't want queries to back up, you don't want to reject or timeout connections, and you don't want to fire up a connection for every client. You do want the proper number of DBMS sessions waiting for inbound connections. mv.NET and FlashCONNECT do this for you, the others do not.
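The borrow/return discipline itself is small enough to sketch. In Clojure, with open-conn as a hypothetical stand-in for whatever login call your chosen API provides:

    (ns d3.pool
      (:import [java.util.concurrent ArrayBlockingQueue]))

    ;; Open n connections once, up front, instead of per request.
    (defn make-pool [open-conn n]
      (let [q (ArrayBlockingQueue. n)]
        (dotimes [_ n] (.put q (open-conn)))
        q))

    ;; Borrow a connection, run f, always return the connection.
    ;; .take blocks while all connections are busy, so excess queries
    ;; queue up instead of timing out or triggering fresh logins.
    (defn with-conn [pool f]
      (let [conn (.take pool)]
        (try (f conn)
             (finally (.put pool conn)))))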
Personally I'd shy away from FlashCONNECT. I was there for its initial development and testing and for years of end-user implementations. It's not as widely used as the other options and is more a tool for those who aren't familiar with other options. If you're talking about Java then you're probably not inclined to use FlashCONNECT. That said, if you have developers who are not familiar with anything outside D3 then FlashCONNECT is a decent server-side tool for them while someone else is focusing on the client-side with other technologies. Everyone should use their best skillset.
Finally (already?), if someone is not familiar with external technologies and is more intimate with D3, then other options exist, like DesignBAIS and Viságe, which mostly remove the burden of communications and allow developers to work on client-side features and back-end rules in BASIC.
I discuss all of these topics plus mobile and telephony on my blog.
HTH
Note: I'm a backend (Java) developer by trade and work in Clojure in my spare time, so forgive me for my ignorance.
I'm trying to get my head around ClojureScript and how it could potentially fit in with projects I'm working on, or plan to work on in the future. As I've grown up with the "classic" web-development mindset (e.g. Clojure running the backend, distributing data to the frontend via JSON to be processed in JS, or returning an HTML page for the browser to render), I'm having trouble understanding how ClojureScript might improve on this model.
Could anyone explain to me what the general approach to ClojureScript/Clojure development would be, seeing as the "ClojureScript One" project moniker signifies that application development will be unified under one language (as such)?
What tasks would normally be done in the Clojurescript portion of the application?
What tasks would normally be done in the Clojure (e.g. backend) portion of the application?
Any help would be appreciated, or if anyone can point me towards some diagrams or explanations or anything - that would be great too!
I think Clojure/ClojureScript applications will be structured very similarly to X backend technology + JavaScript.
One big benefit of architecting applications with Clojure and ClojureScript is a richer data format than JSON (you can represent hash maps and sets with arbitrary keys) without losing compactness.
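For example, data that JSON cannot express directly round-trips through EDN untouched (a tiny sketch):

    (require '[clojure.edn :as edn])

    ;; Sets, keywords, and non-string map keys all survive the trip;
    ;; JSON would force everything through strings.
    (def payload {:tags #{:clojure :web}
                  [:composite "key"] 42})

    (= payload (edn/read-string (pr-str payload)))  ;=> true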
JavaScript is a fine, fine language, but ClojureScript offers quite a few benefits: it's semantically simpler (functional), ships with a rich standard library and a robust, battle-tested application library (Google Closure), plus all the benefits you get from the tasteful application of syntactic abstraction via macros.
That said, it's still very much alpha software and the tooling still needs a lot of work.
A bit of background about me: I have developed with ClojureScript, jQuery, Vaadin, Servlets, JSP, and many other web technologies.
1) ClojureScript is much harder to learn than any other web technology I have used, as you need Java, Clojure, Closure (with an s ;), Closure Library, and ClojureScript-specific knowledge.
2) ClojureScript doesn't make sense for a small app. It only makes sense when you will have a LOT of client-side processing.
3) ClojureScript's only use, as far as I can see, is as a better JavaScript (which is why it is better suited to larger apps), since the minifier part of ClojureScript (the Closure compiler) is available for plain JavaScript too.
4) Only the client end would be written in ClojureScript; the server would be Clojure/Java servlets.
Maybe the Ganelon micro-framework (of which, incidentally, I am the author) will suit your needs - the execution model is similar to Vaadin's: server-side Clojure code pushes UI updates to the browser through AJAX/JavaScript, but we don't store application state in the session by default.
The demo and docs are available at http://ganelon.tomeklipski.com/
For me, Clojure and ClojureScript offer cleaner code than a mixed stack. There is only one language to think in, and the code is quite easy to read.
On the backend, Clojure does the things that Java usually would: input validation, saving to the database and, above all, implementing the business logic. Our backend also validates incoming/outgoing data against types using Prismatic Schema.
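A small sketch of that kind of boundary validation with Prismatic Schema (the Item shape is invented):

    (ns app.schemas
      (:require [schema.core :as s]))

    (def Item
      {:id   s/Int
       :name s/Str
       (s/optional-key :tags) #{s/Keyword}})

    ;; Returns the value when it conforms; throws a descriptive,
    ;; data-carrying error when it does not.
    (s/validate Item {:id 1 :name "widget"})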
Frontend in short: we get pretty code using ClojureScript, and it is fast to write. We are using the ClojureScript version of material-ui for our UI components. We write less code than we would in JavaScript, and I find our UI component code easier to read than its JavaScript counterpart; the main reasons are shorter closing tags and less noise from the language. Development with ClojureScript is quite fast.
Of course ClojureScript is also used for simple input validation, like regex checks for phone numbers etc.
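For instance, a check of that sort is a one-liner (the pattern is only illustrative):

    ;; Loose, illustrative pattern; real phone rules vary by locale.
    (defn phone? [s]
      (boolean (re-matches #"\+?[0-9][0-9 \-]{6,}" s)))

    (phone? "+358 40 1234567")  ;=> true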
One of the disadvantages of Clojure, which you have probably noticed, is long lines once you give functions proper names. I haven't found a silver bullet for coping with that.
As dnolen said: ClojureScript is still developing. It is way better now than it was 6 months ago, so you'll have to keep checking its maturity now and then.
Systems which require communication between disparate applications are quite common.
What is the normal architecture for such applications? Am I right in thinking that web services are the usual tool (and if so, why)?
Also, what other considerations are involved?
Thanks
There is no normal architecture, and web services are certainly one way to link them up, and a popular one. But the choice of communications technology is a trivial part of this problem.
Who is communicating what, when and why, is the head scratcher, and of course we can't answer that in anything but the most general of terms.
You need to start with the type of interactions between the applications.
Just retrieving data from another application
Requesting another application to perform some function, with no feedback of any kind required
Transactional: do something with this and return a response (synchronously or asynchronously)
Are these applications yours? If so, you should head towards SOA, i.e. you take the processing out of the application, make it a service, and then request it. All your other apps can then also request it.
However you go, there is no easy solution, because these applications were not designed to work together…
I know that some big players have embraced it and are already exposing some of their services in an APP-compliant way. However, I haven't found many other (smaller) players in this field. Do you know any web application/service that uses APP as its public API protocol? What is your own take on AtomPub? Do you have any practical experience using it? What are its limitations and drawbacks? Do you prefer AtomPub as your REST style, or do you have some other favourite? And why?
I know, these are many questions, not just one. The thing I'm interested in here is simple, though: how did the APP standard hit the market, and in particular how is its adoption going among web developers?
The company that I work for is developing a lot of RESTful services.
However, none of them expose public APIs (in the sense that all services are consumed internally by our own clients). The reason we went for the REST architectural style was that we wanted our services to be easily consumable and, more importantly, to scale well.
From my own practical experience, I have come to the conclusion that HTTP + the Atom syndication format is a good idea, provided you want to keep things flexible (in terms of different content models, attaching and extending metadata associated with payloads, uniform parsing, etc.). Atom ensures that everybody interprets the payload in a uniform manner without any scope for ambiguity.
However, if you do not have any such complex requirements, or do not foresee them, then the Atom format could be a bit of an overhead. (For instance, elements like Author and Title make more sense in the blogging/RSS world and may not make sense in your particular problem domain.)
Also, if the goal is just to serialize data structures at one end and reconstruct them at the other, then most web frameworks (like WCF) have custom formats which are more appealing.
So in my opinion AtomPub is good if you need flexibility in terms of data representation and if the playing field is huge, with different kinds of clients.
However, if you have good knowledge of your potential clients and of server/client usage patterns, then custom formats might be a good idea.
If the client is browser-based, then formats like JSON are very appealing.
Hope this answers your question.
My own research so far:
WordPress has supported AtomPub as its API protocol since version 2.3
GData is probably the biggest shot in the AtomPub field so far
Habari - a new, promising blogging system that promotes APP as one of its main features
BlogSvc.net - an AtomPub server and blog engine for the .NET platform, written in C#
Jangle - an open source project designed to facilitate API access to Library Systems
There's also mod_atom - an Apache module that stores entries in the filesystem.
Last time I checked (2007 or so), AtomPub was fairly complex to implement. While you can whip together something that emits valid Atom feeds during a lunch break, implementing AtomPub was a fairly big undertaking.
That might have changed thanks to better libraries and tools, but it might still be too complex to be implemented by smaller sites just because it's cool.
And the lack of killer AtomPub client applications puts little or no pressure on server operators to offer an AtomPub interface.