We have a medium-sized ColdFusion code base for our intranet and website. For most of the code's history we have used hard-coded links in the .cfm files to decide where to go and which 'save' code to run.
In the last few years we've begun using CFCs to handle more of the "navigational" code as well as more automated save code (implicitly calling the save process for a given CFC on init).
Assuming that it makes sense to begin using a framework, is it better to begin using it for newer projects or attempt a full scale conversion?
EDIT
To avoid confusion: I'm sensing that by moving to more CFC-based code we are going down the path of accidentally creating our own framework. It seems to me that taking a proactive step toward using a proper framework and letting the CFCs process the data is probably the wiser choice.
I'd only put the effort into a conversion if you were spending more than 10-20% of your time maintaining the project. (Your threshold may be lower or higher.) Other than that, use it just for new projects.
Why? I think the conversion is going to be painful, laborious and potentially a waste of valuable time.
"Assuming that it makes sense to begin
using a framework, is it better to
begin using it for newer projects or
attempt a full scale conversion?"
I would say that the biggest criteria for whether you should move to a framework are:
Am I spending a good amount of time maintaining current code, and is it difficult?
Am I repeating a lot of code? Do I find myself writing the same thing over and over when adding to the current project?
Depending on how large the application is, it might be worth converting a current application to a framework if it saves you more time down the road by making maintenance easier and reducing code repetition for future additions. If you rarely maintain the application except for a few tweaks here and there, then I would say leave it alone and use a framework only for new applications.
Frameworks have a short-term cost and a long-term gain.
When we start out without one, we usually end up indirectly building our own over time to increase code re-use and make things more structured.
I have been a big fan of Fusebox, probably because I've just used it for so long.
What I have done in the past is, if I know the site will never grow into any real web-application functionality, I just roll my own cfswitch to navigate between actions. Each action I simply break down into the dsp/act/qry type files Fusebox likes.
If I ever need to move it into Fusebox, most of my circuits and actions are already done, and the path forward is a bit easier.
On the other hand, if I know the client may want more in the future, I will just put it in a framework and leave it at that.
On a side note, I have also been checking out the very impressive ColdBox, which seems to have fantastic support and scalability, is very well documented, and is CFC-intensive. Check it out too.
Have you considered using a framework such as Fusebox instead of rolling your own? If you begin using a framework on new projects, you might then find it easier to apply what you've learnt to existing projects.
I'm asking the question after reading this article
http://stevehanov.ca/blog/index.php?id=95
Also, isn't there a performance penalty to using CGI instead of FastCGI?
Update: why do some people claim, as in one answer, "that you get 20-30% performance improvement"? Is that a pure guess, or does the number come from a solid benchmark? I have looked at HipHop, and its gains are more on the scale of 10 times.
I've done webdev in a few languages and frameworks, including Python, PHP, and Perl. I host them myself and my biggest sites get around 20k hits a day.
Any language and framework with reasonable speed can be scaled up to take 20k hits a day just by throwing resources at it. Some take more resources than others (Plone and Joomla, I'm looking at you).
My Wt sites (none in production yet) can take a lot more pounding (from memory, around 5000% more, measured with siege) than, for example, my Python sites. That is, when I hit them as hard as I can with siege, the Wt sites serve a lot more pages per second.
I know it's not a true general test though.
Other speed advantages that Wt gives you:
Multi-threading
If you deploy with the built-in web server (behind HAProxy, for example) and make your app multi-threaded, it'll use a lot less memory than, say, a Perl or PHP app.
Generally with PHP and Perl apps, you'll have Apache fire up a process for each incoming connection, and each process loads the whole PHP interpreter, all the code and variables and objects and whatnot. With heavy frameworks like Joomla and WordPress (depending on the number of plugins), each process can get pretty humongous in memory consumption.
With a Wt app, each session loads a WApplication instance (a C++ object) and its whole tree of widgets and so on, but the memory the code itself uses stays the same no matter how many connections there are.
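To make that per-session model concrete, a minimal Wt application looks roughly like this (a sketch assuming the Wt 4 API; the class name HelloApp is just a placeholder). Each browser session gets its own HelloApp object and widget tree, while the compiled code is shared by the single process:

    #include <memory>
    #include <Wt/WApplication.h>
    #include <Wt/WEnvironment.h>
    #include <Wt/WText.h>

    // One of these objects is created per browser session; its widget tree is the
    // per-session state described above.
    class HelloApp : public Wt::WApplication {
    public:
        explicit HelloApp(const Wt::WEnvironment& env) : Wt::WApplication(env) {
            setTitle("Hello");
            root()->addWidget(std::make_unique<Wt::WText>("Hello from this session"));
        }
    };

    int main(int argc, char** argv) {
        // The built-in web server multiplexes every session inside this one process.
        return Wt::WRun(argc, argv, [](const Wt::WEnvironment& env) {
            return std::make_unique<HelloApp>(env);
        });
    }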
The inbuilt Web 2.0-ness
Traditional apps are generally still built around the old "HTTP request comes in", "we serve a page", "done" style of things. I know they are adding more and more AJAX-y kinds of things all the time.
With Wt, it defaults to using WebSockets where possible, updating only the part of the page that needs updating. It falls back to standard AJAX, and then, if that's not supported, to plain HTTP requests. For AJAX- and WebSocket-enabled clients, the same WApplication C++ object is used continuously, so no time is lost setting up a new session and all that.
In response to "C++ is too hard for webdev":
C++ does have a bit of a learning curve. In the mid-nineties we did websites in Java/J2EE. That was considered commercially viable back then, and it was a super-duper pain to develop in, but it did have the advantage of encouraging good documentation and coding practices.
With scripted websites, it's easy to take shortcuts and not realize they're there. For example, one eight-year-old Perl site I worked on had some code duplicated and nobody noticed: each time it showed a list of products, it was running the same SQL query twice.
With a C++ site, I think that would be less likely to happen, because the Perl site had very little programming structure (such as functions); it was just Perl and embedded HTML. In C++ you'd likely have named methods, and the duplication would have shown up as a name clash.
Types
One time, there was a method that took an int identifier; later on we changed it to a UUID string. The Python code was great, and we didn't think we needed to change it; it ran fine. However, there was a little line buried deep down that behaved differently when you passed it a string. It was a very hard bug to track down, and it corrupted the database. (Luckily only on the dev and test machines.)
C++ would have certainly complained a lot, and forced us to re-write the functions involved and not be lazy buggers.
With C++ and Java, the compiler catches a lot of those sorts of mistakes for you with errors and warnings.
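As a hedged illustration (the function name loadRecord and the UUID literal below are made up, not from the original system), the compiler flags every call site the moment the signature changes:

    #include <iostream>
    #include <string>

    // Originally the function took an integer identifier:
    //     void loadRecord(int id);
    // After the switch to UUID strings, the signature becomes:
    void loadRecord(const std::string& id) {
        std::cout << "loading record " << id << "\n";   // ... look up the record by UUID ...
    }

    int main() {
        // loadRecord(42);   // no longer compiles: there is no int -> std::string conversion,
        //                   // so every forgotten call site is flagged at build time
        loadRecord("3f8a2c1e-0000-0000-0000-000000000000");  // call sites must be updated explicitly
        return 0;
    }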
I find unit testing is generally not as necessary with C++ apps (don't shoot me) as it is with scripting-language apps. This is because the language enforces a lot of the things you'd normally put in a unit test for, say, a Python app.
Summary
From my experience so far, Wt does take longer to develop things in than existing frameworks, mainly because the existing frameworks have a lot more out-of-the-box functionality. However, it is easier to build extremely customized apps in Wt than in, say, WordPress, IMHO.
People I've spoken with who've moved from PHP to Wt (a C++ web framework) reported significant improvements. In the small applications I've created with Wt while learning it, I've seen it run faster than the equivalent PHP applications I've written. Take that for what you will, but I'm sold.
This reminds me of how, 20-30 years ago, people were pitting Assembly against C, and then, 10-20 years ago, C against C++. Of course C++ will be faster than PHP/Rails, but it'll take 5x more effort to build a maintainable and scalable application.
The point is that you get a 20-30% performance improvement while sacrificing your development resources. Would you rather have your app work 30% faster or have 1/2 of the features implemented?
Most web applications are network-bound instead of processor-bound. Writing your application in C++ instead of a higher-level language doesn't make much sense unless you're doing really heavy computation. Also, writing correct C++ programs is difficult. It will take longer to write the application and it is more likely that the program will fail in spectacular ways due to misused pointers, memory errors, undefined behavior, etc. In general, I would say it is not worth it.
Whenever you eliminate a layer of interpretive or OS abstraction, you are bound to get some performance gain. That being said, the language or technology itself does not automatically mean all your problems are solved. I've fixed C++ code that took many hours to process a relatively simple set of records. The problem was in the implementation, and the fix was not related to the language's features or limitations.
Assuming things are all implemented correctly, you're sure to get better performance. The problem will be in finding the bugs. One of the issues with C++ is that many developers are now "trained" on, or accustomed to, having a lot of the details of memory management hidden behind objects. That removes the habit of considering things like, "What can happen if I pass this pointer around to several threads?" Sometimes it works out well, but not always. There are still subtleties of the language you need to consider, regardless of how well the objects hide the nasty details.
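For example (a minimal sketch, not taken from any particular codebase): sharing a std::shared_ptr with the worker thread answers the lifetime part of that question explicitly, instead of leaving a raw pointer's validity to chance:

    // Compile with -pthread on Linux.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <thread>

    int main() {
        // Handing a raw pointer to a stack object to another thread leaves the
        // lifetime question entirely up to you. Sharing ownership makes it explicit:
        auto msg = std::make_shared<std::string>("status: ok");

        std::thread worker([msg] {          // the lambda copies the shared_ptr
            std::cout << *msg << "\n";      // safe: shared ownership keeps the string alive
        });

        worker.join();
        return 0;
    }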
In my experience, you'll need several seasoned C++ developers watching over the code to be able to keep the bugs and memory leaks from getting out of hand.
I'm certainly not sold on this. If you want a performance gain over PHP, why not use a Java (or, better yet, Scala) framework? Those languages are much better suited to web development, have nice, relatively easy-to-use frameworks, and avoid a lot of the headaches of C++. I've always seen one of the main pluses of web development (and of most modern non-scientific/high-performance applications) as being able to avoid the headaches that come with C/C++ development.
Can anyone expound on the disadvantages, if there are any, of using a ColdFusion development framework? I'm developing an application traditionally, and I'm tempted to use a framework after seeing how simply some things can be done.
I'm new to ColdFusion and frameworks in general. I want to understand the implications of using a framework, including advantages and disadvantages.
Disadvantages:
learning curve (pick a lean framework to reduce this)
a front controller makes ugly URLs, and often needs URL rewriting at the web server layer
risk of the framework being discontinued (no support, hard to maintain, breaks on a new CF version)
framework bugs (pick a popular framework with good and fast support)
sometimes harder to debug, since actions are generally not a .cfm anymore. Tip: make use of cfdump and cfabort to inspect the dump in the controller layer
some frameworks take longer to reinit. Since most frameworks cache the configuration and controller layer for performance, you'll need to reinit all the time during the development phase. CF9 eases this problem because it is much faster.
lastly, sometimes you'll be using the framework's API, an abstraction over CFML, and miss out on the native ColdFusion way of solving the same problem.
Performance generally is a non-issue. Don't worry.
Henry's already given a good answer, but I would just like to pick up on this part of your question:
But does it not come with a performance tax?
The performance overhead of a framework is negligible.
In fact, you may even get better performance from frameworks such as ColdBox, which have built-in caching.
Remember, most frameworks are mature codebases used by lots of people - most likely, your newly written untested code is going to be the culprit, not the framework.
However, as a general rule (not specific to frameworks) performance is not a problem unless you've got measurable results that say it is.
i.e. don't just think "I'm going to do X instead of Y because I think it'll be faster" - go with the simplest option that meets the users' needs, and only change it if you can prove that it has a performance problem and that your proposed solution is better.
It depends on the nature of the project you are working on. I think it's always advisable to use a framework for better code organization, scalability, conventions and so on. If you are about to start an enterprise-level application, then ColdBox is the best framework as far as my experience goes. It has a bigger learning curve, but it's worth learning. If it's a simple start-up project, then FW/1 is good. You can find a list here:
http://www.riaxe.com/blog/top-coldfusion-frameworks/
I work in a group that does a large mix of research development and full shipping code.
Half the time I develop processes that run on our real-time system (somewhere between soft real-time and hard real-time; medium real-time?).
The other half I write or optimize processes for our researchers who don't necessarily care about the code at all.
Currently I'm working on a process which I have to fork into two different branches.
There is a research version for one group, and a production version that will need to occasionally be merged with the research code to get the latest and greatest into production.
To test these processes you need to set up a semi-complicated testing environment that will send the data we analyze to the process at the correct time (it's a real-time system).
I was thinking about how I could make the:
Idea
Implement
Test
GOTO #1
cycle as easy, fast, and pain-free as possible for my colleagues.
One idea I had was to embed a scripting language inside these long-running processes.
Then, as the process runs, they could tweak the actual algorithm and its parameters.
Off the bat I looked at embedding:
Lua (useful guide)
Python (useful guide)
These both seem doable and might actually fully solve the given problem.
Any other bright ideas out there?
Recompiling after a 1-2 line change, redeploying to the test environment and restarting just sucks.
The system is fairly complicated and hopefully I explained it half decently.
If you can change enough of the program through a script to be useful, without a full recompile, maybe you should think about breaking the system up into smaller parts. You could have a "server" that handles data loading etc and then the client code that does the actual processing. Each time the system loads new data, it could check and see if the client code has been re-compiled and then use it if that's the case.
I think there would be a couple of advantages here, the largest of which is that the whole system would be much less complex. Now you're working in one language instead of two, so there's less chance that people will mess things up when switching from Python or Lua mode to C++ mode in their heads. By embedding some other language in the system you also run the risk of becoming dependent on it. If you use Python or Lua to tweak the program, those languages either become a dependency when it comes time to deploy, or you need to back things out to C++. And if you choose to port things back to C++, there's another chance for bugs to crop up during the switch.
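One possible way to realize that "reload the client code" step on a POSIX system (purely a sketch; the library name client.so and the function process_batch are hypothetical) is to build the processing code as a shared library and dlopen() a fresh copy between batches:

    // Link with -ldl on Linux.
    #include <dlfcn.h>
    #include <iostream>

    // Hypothetical plugin interface: the processing ("client") code is compiled into
    // client.so and exports a plain C function named process_batch.
    using process_fn = void (*)(const double* data, int n);

    int main() {
        double batch[] = {1.0, 2.0, 3.0};

        // In the real system this would sit inside the data-loading loop: each time
        // new data arrives, pick up the latest build of the client code.
        void* handle = dlopen("./client.so", RTLD_NOW);
        if (!handle) {
            std::cerr << "dlopen failed: " << dlerror() << "\n";
            return 1;
        }

        auto process = reinterpret_cast<process_fn>(dlsym(handle, "process_batch"));
        if (!process) {
            std::cerr << "dlsym failed: " << dlerror() << "\n";
            dlclose(handle);
            return 1;
        }

        process(batch, 3);   // run the freshly compiled processing code
        dlclose(handle);     // unload so the next pass can load a newer build
        return 0;
    }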
Embedding Lua is much easier than embedding Python.
Lua was designed from the start to be embedded; Python's embeddability was grafted on after the fact.
Lua is about 20x smaller and simpler than Python.
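To make the "designed to be embedded" point concrete, here is a minimal sketch of the parameter-tweaking idea from the question (assuming Lua 5.x; tuning.lua and the threshold variable are made-up names). The process re-reads the script on every pass, so a researcher can edit it while the program keeps running:

    // Sketch: build against Lua, e.g. g++ tune.cpp -llua
    #include <lua.hpp>
    #include <iostream>

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);

        // tuning.lua is a hypothetical script the researcher edits at runtime,
        // e.g. a single line:  threshold = 0.75
        for (int pass = 0; pass < 3; ++pass) {
            if (luaL_dofile(L, "tuning.lua") != 0) {        // reload the script each pass
                std::cerr << "lua error: " << lua_tostring(L, -1) << "\n";
                lua_pop(L, 1);
                continue;
            }

            lua_getglobal(L, "threshold");                  // read a value the script set
            double threshold = lua_tonumber(L, -1);
            lua_pop(L, 1);

            std::cout << "pass " << pass << ": running with threshold " << threshold << "\n";
            // ... one iteration of the real processing would go here ...
        }

        lua_close(L);
        return 0;
    }

The same approach extends to whole functions: define one in the script and call it per data point with lua_pcall, which would cover tweaking the algorithm itself, not just its parameters.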
You don't say much about your build process, but building and testing can be simplified significantly by using a really powerful version of make. I use Andrew Hume's mk, but you would probably be even better off investing the time to master Glenn Fowler's nmake, which can add dependencies on the fly and eliminate the need for a separate configuration step. I don't ordinarily recommend nmake because it is somewhat complicated, but it is very clear that Fowler and his group have built into nmake solutions for lots of scaling and portability problems. For your particular situation, it may be worth the effort to master it.
Not sure I understand your system, but if the build and deployment is too complicated, maybe you could automate it? If deployment is completely automatic, would that solve the problem?
I don't understand how a scripting language would solve the problem? If you change your algorithm, you still need to restart calculation from the beginning, don't you?
It kind of sounds like what you need is CruiseControl or something similar: every time you touch the baseline code, it rebuilds and reruns the tests.
We have an existing "legacy" app written in C++/PowerBuilder running on Unix with its own Sybase databases. For complex organizational reasons (existing apps have to go through a lot of red tape to be modified) and code reasons (no refactoring has been done in years, so the code is spaghetti), it's difficult to get modifications made to this application. Hence I am considering writing a new, modern web app, maybe Grails-based, to do some "admin"-type things directly against the database - for example, to add users or to add "constraint rows".
What does the software community think of this approach of doing a run-around of the existing app like this? Good idea? Tips and hints?
There is always a debate between a single monolithic app and several more focused apps. I don't think it's necessarily a bad idea to separate things - you make things more modular, reduce dependencies, etc. The thing to avoid is duplication of functionality. If you split off a separate administration app, make sure to remove that functionality from the old app, or else you will have an unmaintained set of administration tools that will likely come back to haunt you.
Good idea? No.
Sometimes necessary? Yes.
Living in a world where you sometimes have to do things you know aren't a good idea? Priceless.
In general, you should always follow best practices. For everything else, there's kludges.
See this, from Joel, first!
Update: I had somewhat misconstrued the question and thought that more was being rewritten.
My perspective on your suggested "utility" system is not nearly so reserved as would be suggested by my link to Joel's article. Indeed, I would heartily recommend that you take this approach for a number of reasons.
First, this may well be the fastest route to your desired outcome since the old code is so difficult to work with.
Second, this gives you experience with a new development technology and does so in the context of your existing work - this is a real advantage.
Third, I took this approach years ago when transitioning an application from C++ to Delphi. In time, the Delphi app grew to be so capable that a complete leap onto that platform became possible. At no point were users without the functionality that they already knew because the old app wasn't phased out until the replacement functionality had been proven. However, it is at this stage that you'll want to heed Joel's warnings: remember that some of the "messiness" you see is actually knowledge embodied in the old code.
Good idea? That depends on how well the database is documented and/or understood. Make a mistake about some rule, relation, or constraint that is implemented implicitly at the application level, and your legacy app may end up doing cartwheels down the aisle.
Take a hypothetical example. Let's say adding a user with the legacy system adds records to the following tables:
app_users
app_constraints
app_permissions
user_address
Let's assume you catch the first three, miss the fourth. It can't be important, right? But what if in the app, in the 50 places that app_users is used, one place does an inner join to user_address. (And why not? The app writer knew that he always wrote a record to user_address!) The newly added user suddenly disappears from the application's view, a condition that "could never happen" according to the original coder, and the application coughs up a hair ball. Orders can't be taken. Production stops. A VP puts his new cardiac bypass surgery to the test.
Who gets blamed? Not the long-gone developer who should have coded for more exceptions. Not the guys who set up the red tape to control change. The guy that did an end run around those controls.
Good luck,
Terry.
I feel the need to refactor my old CF5-based code into CFCs. We already have some code in ColdSpring and Transfer, but feel a large rewrite to ColdSpring and Transfer is pointless.
What tips and approaches should I consider, and what gotchas will I hit?
How can I make this easy?
I don't mind keeping ColdSpring in the mix but Transfer is the bit I'm scared of with the size of the project.
Edit: my code base has been going for 7-8 years and is vast. It would be difficult to describe; I'm looking for generic suggestions on approaches.
Changing the whole code base just for the sake of it, when it basically works, would introduce a lot of potential bugs into your system. I don't think there is an easy way to do it.
If you look at the areas of your site which are (1) most likely to change and (2) executed the most, you may be able to target some areas that could benefit from change, and see how easily they would fit into a CFC-based framework and what benefits that would bring. But for most of the code, if it is working OK, there may be no pressing need to change it.
However whenever you need to do a major alteration to part of the system it may be worth looking at that from an OO perspective and moving the existing code over, where applicable.
In one of my ongoing projects (almost the same situation, or even worse: most of the code is really bad) I am using a technique I'd call "wave-style". The general ideas are the following:
Split processing from output. I cannot implement true MVC here, but at least I can move the views into separate templates (and sometimes re-use them) and prepare all the data in the basic (model) templates.
Move all repeated code into components - this is one of the most important tips.
Group related functions into components. Say, all customer-related logic goes into CustomerManager.cfc, invoices into InvoiceManager.cfc, and so on.
Why "wave"? In a big project I can't just sit down and rewrite all the customer-related code, so I have to do it step by step. For example, I had to work on customer signup and extend it with a few attributes. I created a basic component and moved into it the methods that validate the form (checking login, email, etc.) and add the customer, so that page now works in the new style. Later I will need to improve the invoice page, where I need the invoice owner's details: I just add a method to the customer manager and get rid of the direct queries. Later still, I edit the customer page... It could also be called "on-demand refactoring" or something like that.
There may be additional considerations depending on your project's current state, but this approach has helped me a lot. I hope you'll find these tips useful.
Before you change anything: create a full set of regression tests!
When refactoring, the goal has to be preserve functionality first, so that you don't directly affect your clients.
I agree with Sergii's wave-style refactoring also - this allows you to break things into manageable chunks rather than doing everything in one go.
But whatever method you have, the more regression tests you can create, the better - it's really the only way you can confirm you haven't unintentionally changed something.
This is extremely hard (bordering on impossible) to answer without knowing any of your code.
The question is a bit like "I want to disassemble my old Volkswagen and build a new one from the parts, what should I consider?" :-)
My advice would be to start off by encapsulating your business logic into CFCs instead of worrying about the whole presentation layer of your site.
By just concentrating on the business logic, you'll be able to get the most important functionality into CFCs and ease the maintenance nightmare. It also won't be too hard to just drop these CFCs into your existing site.
After getting as much business logic into the CFCs as you can, you'll notice that the enormous monster has been cut down to size. At that point you can decide what you want to do with the presentation layer of your site. You're now free to pick from the multitude of frameworks available (CFWheels, Fusebox, ColdBox, Model-Glue) to port over the presentation layer.
Or you could just say "the heck with it" and rewrite the whole thing in CFWheels from the start :)
If you are not using version control get that set up before you do anything else. Being able to back out of broken refactoring is a serious life saver. After that I agree with what has been posted. You will want to take on small chunks at a time - divide and conquer.