Better managing ColdFusion Component (CFC) functions

I rely heavily on CFCs. Within an application I often have multiple CFCs, each containing dozens of functions, so over time it's easy to forget or overlook functions that have already been created.
So my question is: how do you manage all these functions? Do you keep a separate document listing all the functions and indexing them that way? Is there a built-in automated feature we can use?
What I've been doing is naming functions more meaningfully, but it's very tedious. There has to be a better way to do this. Just looking for your thoughts.
Thank you in advance.

I don't think there's a magic bullet here. Programmers with a bit more OCD than I have will likely respond and give you an ironclad solution. For me (or my team), I keep a library of common components in a folder that I reuse for various sites and applications. Then I add them as a /util or /lib folder for a given project and use them (or extend them) as needed. Good planning and good documentation (a wiki is a great choice for a team) are a must.
Planning carefully whether to extend a CFC is especially important. Otherwise you have to chase down nested functions that are part of some superclass way down in the weeds (as in, this works, but I really have no idea why it works).
This is where frameworks can provide much needed structure. For common functions and events they generally provide a location and a convention for creating such things. That makes them easy to decipher (as long as you've been indoctrinated into the framework). They have some downsides but they make life a lot easier :)

- You should follow proper naming conventions for each and every CFC.
- Each CFC should be meant for a particular purpose, e.g. a login CFC should only contain login-related functions.
- All common functions should be kept together in one CFC that can be extended by the other CFCs.
- You can use a generic CFC for miscellaneous functions.
Now, if you want to add a new function for some piece of functionality, you only have to scan three CFCs: the one dedicated to that functionality, the common one, and the generic one. Then add the new function wherever it fits best.
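The layering described above is language-agnostic; here is a rough sketch of the same split in C++ rather than CFML (all names are hypothetical): a common base class of shared helpers, a feature-specific class that extends it, and a generic grab-bag for everything else.

```cpp
#include <string>

// Common: shared helpers that every feature class can reuse.
class Common {
protected:
    std::string trimmed(const std::string& s) const {
        const auto first = s.find_first_not_of(" \t");
        if (first == std::string::npos) return "";
        const auto last = s.find_last_not_of(" \t");
        return s.substr(first, last - first + 1);
    }
};

// Login: only login-related functions live here.
class Login : public Common {
public:
    bool isValidUsername(const std::string& name) const {
        return !trimmed(name).empty();
    }
};

// Misc: the "generic" component for functions that fit nowhere else.
class Misc : public Common {
    // ...random one-off helpers...
};
```

With this layout, deciding where a new function belongs is a three-way choice: the feature class, the common base, or the grab-bag.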

Related

Clojure module dependencies

I'm trying to create a modular application in Clojure.
Let's suppose we have a blog engine that consists of two modules, for example a database module and an article module (something that stores articles for the blog), each with some configuration parameters.
So the article module depends on storage, and having two instances of the article module and the database module (with different parameters) allows us to host two different blogs in two different databases.
I tried to implement this by creating new namespaces for each initialized module on the fly and defining functions in these namespaces with partially applied parameters. But this approach feels like a hack, I think.
What is the right way to do this?
A 'module' is a noun, as in the 'Kingdom of Nouns' by Steve Yegge.
Stick to non-side-effecting or pure functions of their parameters (verbs) as much as possible, except at the topmost levels of your abstractions. You can organize those functions however you like. At the topmost levels you will have some application state; there are many approaches to managing that, but the one I use most is to hide these top-level services under a Clojure protocol, then implement it in a Clojure record (which may hold references to database connections or some such).
This approach maximizes flexibility and prevents you from writing yourself into a corner. It's analogous to Java's dependency injection. Stuart Sierra did a good talk recently on these topics at Clojure/West 2013, but the video is not yet available.
Note the difference from your approach. You need to separate the management and resolution of objects from their lifecycles. Tying them to namespaces is quick for access, but it means any functions you write as clients that use that code are now accessing global state. With protocols, you can separate the implementation detail of global state from the interface of access.
If you need a motivating example of why this is useful, consider: how would you intercept all access to a service that's globally accessible? You would have to push the full implementation down and make the entry point a wrapper function, instead of pushing the relevant details closer to the client code. What if you wanted some behavior for some clients of the code and not others? Now you're stuck. This approach is just anticipating those inevitable trade-offs preemptively and making your life easier.
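As a rough C++ analogue of the protocol/record idea described above (all names are hypothetical, not from any real blog engine): the "protocol" becomes an abstract interface, the "record" becomes a class that owns its connection parameters, and interception is just another implementation that wraps an existing one, with no global state involved.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>

// The "protocol": an interface describing article storage, with no global state.
struct ArticleStore {
    virtual ~ArticleStore() = default;
    virtual void save(const std::string& title, const std::string& body) = 0;
};

// The "record": an implementation that owns its own configuration.
class DbArticleStore : public ArticleStore {
public:
    explicit DbArticleStore(std::string connectionString)
        : connectionString_(std::move(connectionString)) {}
    void save(const std::string& title, const std::string&) override {
        std::cout << "saving '" << title << "' via " << connectionString_ << "\n";
    }
private:
    std::string connectionString_;
};

// Interception without touching client code: wrap any ArticleStore.
class LoggingArticleStore : public ArticleStore {
public:
    explicit LoggingArticleStore(std::shared_ptr<ArticleStore> inner)
        : inner_(std::move(inner)) {}
    void save(const std::string& title, const std::string& body) override {
        std::cout << "about to save: " << title << "\n";
        inner_->save(title, body);
    }
private:
    std::shared_ptr<ArticleStore> inner_;
};

int main() {
    // Two independent "blogs" with different parameters, no namespace tricks needed.
    auto blogA = std::make_shared<DbArticleStore>("db://blog-a");
    auto blogB = LoggingArticleStore(std::make_shared<DbArticleStore>("db://blog-b"));
    blogA->save("Hello", "first post");
    blogB.save("Hi", "another post");
}
```

Clients only ever see ArticleStore, so swapping in the logging wrapper (or any other behavior) never requires changing them.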

What is the security or performance risk of including all UDFs on every page request in ColdFusion?

I currently have my model and controller CFCs extend a CFC, via a Proxy component, that has all my functions. This makes all those UDFs available everywhere in the app. Are there security or performance risks in doing this? I use my models in an OO fashion. Including these functions like this makes them a part of each object as well. Specifically, could that cause performance issues when creating a lot of objects in a single page request?
I'm using CF 9 and 10 with this framework.
Thanks!
I think this is less a security/performance concern and more an... "ick" concern. ;) CF, especially in 9 and 10, has very fast performance, especially after the first hit. I remember doing some testing with including 1,000 or so UDFs, and after the initial hit it was pretty much instantaneous. That being said, just because you can do it doesn't mean you should. The Application.cfc is specifically built as a way to manage your application and how it acts. Your model/service/controller CFCs should not be extending it. The normal rule for extension is "Does A pass the 'Is A' test for B?", and your CFCs are definitely not an "Is A" match for Application.cfc.
What I'd recommend instead is to create a CFC for your utility functions. Heck, call it util.cfc. Then if your model or other CFCs need these functions, inject them in via a setter (and something like ColdSpring could make this even easier).
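A minimal sketch of that setter-injection idea, shown here in C++ rather than CFML since the shape is the same (all names are illustrative): the model holds a reference to the utility object instead of inheriting from it.

```cpp
#include <cctype>
#include <memory>
#include <string>
#include <utility>

// Util: a plain bag of helper functions, not a base class.
class Util {
public:
    std::string upperCase(std::string s) const {
        for (auto& c : s)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        return s;
    }
};

// A model that uses the helpers without extending anything.
class UserModel {
public:
    void setUtil(std::shared_ptr<Util> util) { util_ = std::move(util); }
    std::string displayName(const std::string& name) const {
        return util_ ? util_->upperCase(name) : name;
    }
private:
    std::shared_ptr<Util> util_;
};
```

The helpers stay available wherever they are injected, but they never become part of the model's public type the way inheritance would make them.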

Is it better to have a lot of interfaces or just one?

I have been working on this plugin system. I thought I passed design and started implementing. Now I wonder if I should revisit my design. My problem is the following:
Currently in my design I have:
- An interface class FileNameLoader for loading the names of all the shared libraries my application needs to load, e.g. load all files in a directory, load all files specified in an XML file, load all files the user inputs, etc.
- An interface class LibLoader that actually loads the shared object. This class is only responsible for loading a shared object once its file name has been given. There are various ways one may need to load a shared lib, e.g. use RTLD_NOW/RTLD_LAZY, check whether the lib has already been loaded, etc.
- An ABC Plugin which loads the functions I need from a handle to a library once that handle is supplied. There are so many ways this could change.
- An interface class PluginFactory which creates Plugins.
- An ABC PluginLoader which is the parent class that manages everything.
Now, my problem is that I feel FileNameLoader and LibLoader could go inside Plugin. But this would mean that if someone wanted to just change RTLD_NOW to RTLD_LAZY, he would have to change the Plugin class. On the other hand, I feel that there are too many classes here. Please give some input. I can post the interface code if necessary. Thanks in advance.
EDIT:
After giving this some thought, I have come to the conclusion that more interfaces is better (in my scenario at least). Suppose there are x implementations of FileNameLoader, y implementations of LibLoader, and z implementations of Plugin. If I keep these classes separate, I have to write x + y + z implementation classes, and I can then combine them to get any functionality possible. On the other hand, if all these interfaces were folded into the Plugin class, I'd have to write x*y*z implementation classes to get all the possible functionalities, which is larger than x + y + z given that there are at least 2 implementations per interface. That is just one side of it. The other advantage is that the purpose of each interface is clearer when there are more interfaces. At least that is what I think.
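To make the x + y + z point concrete, here is a skeletal C++ sketch using the interface names from the question (the method names are my guesses, not the actual design): because PluginLoader is composed from the small interfaces, any FileNameLoader can be paired with any LibLoader without writing a new class for each combination.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct FileNameLoader {                       // x implementations
    virtual ~FileNameLoader() = default;
    virtual std::vector<std::string> loadFileNames() = 0;
};

struct LibLoader {                            // y implementations
    virtual ~LibLoader() = default;
    virtual void* load(const std::string& path) = 0;  // e.g. dlopen with RTLD_NOW or RTLD_LAZY
};

struct Plugin {                               // z implementations
    virtual ~Plugin() = default;
    virtual void bindFunctions(void* handle) = 0;
};

// PluginLoader combines one implementation of each; x + y + z classes
// cover every combination instead of x * y * z.
class PluginLoader {
public:
    PluginLoader(std::unique_ptr<FileNameLoader> names, std::unique_ptr<LibLoader> libs)
        : names_(std::move(names)), libs_(std::move(libs)) {}
    void loadAll(Plugin& plugin) {
        for (const auto& name : names_->loadFileNames())
            plugin.bindFunctions(libs_->load(name));
    }
private:
    std::unique_ptr<FileNameLoader> names_;
    std::unique_ptr<LibLoader> libs_;
};
```

Switching from RTLD_NOW to RTLD_LAZY then only means supplying a different LibLoader; Plugin and FileNameLoader are untouched.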
My C++ projects generally consist of objects that implement one or more interfaces.
I have found that this approach has the following effects:
- Use of interfaces enforces your design.
- It (in my opinion only) ensures a better program design.
- Related functionality is grouped into interfaces.
- The compiler will let you know if your implementation of an interface is incomplete or incorrect (good for changes to interfaces).
- You can pass interface pointers around instead of entire objects.
- Passing around interface pointers has the benefit that you expose only the functionality required to other objects.
COM employs interfaces heavily, as its modular design is useful for IPC (inter-process communication), promotes code reuse, and enables backwards compatibility.
Microsoft uses COM extensively and bases its OS and most important APIs (DirectX, DirectShow, etc.) on COM for these reasons, and although it's hardly the most accessible technology, COM is not going away any time soon.
Will these aid your own program(s)? Up to you. If you're going to turn a lot of your code into COM objects, it's definitely the right approach.
As for the other good stuff you get with interfaces that I've mentioned, make your own judgement as to how useful it will be to you. Personally, I find interfaces indispensable.
Generally the only time I provide more than one interface is when I have two completely different kinds of clients (e.g. clients and the server). In that case, yes, it is perfectly OK.
However, this statement worries me:
"I thought I passed design and started implementing"
That's old-fashioned Waterfall thinking. You are never done designing. You will almost always have to do a fairly major redesign the first time a real client tries to use your class. Thereafter, every now and then you'll discover edge cases of client use that require (or would greatly benefit from) an extra new call or two, or a slightly different approach to all the calls.
You might be interested in the Interface Segregation Principle, which results in more, smaller interfaces.
"Clients should not be forced to depend on interfaces that they do not use."
More detail on this principle is provided by this paper: http://www.objectmentor.com/resources/articles/isp.pdf
This is part of Bob Martin's synergistic SOLID principles.
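A tiny C++ illustration of the principle (hypothetical interfaces, not from the question's code): a client that only reads configuration should not be forced to depend on an interface that also knows how to write it.

```cpp
#include <string>

// Segregated interfaces: clients depend only on what they actually use.
struct ConfigReader {
    virtual ~ConfigReader() = default;
    virtual std::string get(const std::string& key) const = 0;
};

struct ConfigWriter {
    virtual ~ConfigWriter() = default;
    virtual void set(const std::string& key, const std::string& value) = 0;
};

// A read-only client is unaffected by any change to the writer interface.
std::string themeName(const ConfigReader& config) {
    return config.get("theme");
}
```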
There isn't a golden rule. It'll depend on the scenario, and even then you may find in the future some assumptions have changed and you need to update it accordingly.
Personally I like the way you have it now. You can replace at the top level, or very specific pieces.
Having the One Big Class That Does Everything is wrong. So is having One Big Interface That Defines Everything.

Presenting MVC to Old C++ Spaghetti Coders?

I wish to present the idea of MVC to a bunch of old C++ spaghetti coders (at my local computer club).
One of them, who has a lot of influence on the rest of the group, seems to finally be getting the idea of encapsulation (due in large part to this website).
I was hoping that I could also point him in the right direction by showing him Model View Controller, but I need to do it in a way that makes sense to him, and it probably needs to be written in C/C++!
I realize that MVC is a very old architectural pattern so it would seem to me that there should be something out there that would do the job.
I'm more of a web developer, so I was wondering if anybody out there who is a good C/C++ coder could tell me what it is that made the MVC light switch turn on in your head.
Don't start off with MVC. Start off with Publish / Subscribe (AKA the "listener" pattern).
Without the listener pattern fully understood, the advantages of MVC will never be understood. Everyone can understand the need to update something else when something changes, but few think about how to do it in a maintainable manner.
Present one option after another, showing each option's weaknesses and strengths: making the variable a global, merging the other portion of code into the variable holder, modifying the holder to directly inform the others, and eventually creating a standard means of registering the intent to listen.
Then show how the full-blown listener can really shine. Write a small "model" class, add half a dozen "listeners", and show how you never had to compromise the structure of the original class to add "remote" updates.
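Here is a minimal C++ sketch of that small model-plus-listeners demo (class and function names are made up for illustration):

```cpp
#include <functional>
#include <iostream>
#include <vector>

// The "model": owns one value and a list of listeners, nothing else.
class Counter {
public:
    using Listener = std::function<void(int)>;
    void addListener(Listener l) { listeners_.push_back(std::move(l)); }
    void increment() {
        ++value_;
        for (const auto& l : listeners_) l(value_);  // notify everyone interested
    }
private:
    int value_ = 0;
    std::vector<Listener> listeners_;
};

int main() {
    Counter counter;
    // Two "remote" listeners added without changing Counter's structure at all.
    counter.addListener([](int v) { std::cout << "console view: " << v << "\n"; });
    counter.addListener([](int v) { if (v % 10 == 0) std::cout << "milestone reached\n"; });
    counter.increment();
}
```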
Once you get this down, move the idea into the "model view" paradigm. Throw two or three different views on the same model, and have everyone amazed at how comparatively easy it is to add different views of the same information.
Finally discuss the need to manage views and update data. Note that the input is partially dependent on items which are not in the view or the model (like the keyboard and mouse). Introduce the idea of centralizing the processing where a "controller" needs to coordinate which models to create and maintain in memory, and which views to present to the user.
Once you do that, you'll have a pretty good introduction to MVC.
You might find it easier to sell them on the Document/View or Document/Presenter patterns. MVC was invented in Smalltalk, where everything about the different UI elements had to be coded by the developer (as I understand it - I never used the thing). Thus the controller element was necessary because you didn't have things like TextElement::OnChange. Nowadays, more modern GUI APIs use Document/View, but Document/Presenter is something I've seen proposed.
You might also consider reading Robert Martin's article on the TaskMaster framework.
You might also consider that any C++ developer who is not familiar with these patterns and already understands their purpose and necessity is either a complete newb or a basket-case best avoided. People like that cause more harm than good and are generally too arrogant to learn anything new or they already would have.
Get some spaghetti C++ code (theirs?), refactor it to use MVC, and show them what advantages it has, like easier unit testing, re-use of models, making localized changes to the view with less worry, etc.

How specific to get on design document?

I'm creating a design document for a security subsystem, to be written in C++. I've created a class diagram and sequence diagrams for the major use cases. I've also specified the public attributes, associations and methods for each of the classes. But I haven't drilled the method definitions down to the C++ level yet. Since I'm new to C++, as is the other developer, I wonder if it might make sense to go ahead and specify down to this level. Thoughts?
Edit: Wow - unanimously against. I was thinking about, for example, the whole business of specifying const vs. non-const, passing references, handling the default constructor and assignment, and so forth. I do believe it's been quite helpful to spec things out to this level of detail so far. I definitely have a clearer idea of how the system will work. Maybe I should just do a few methods, as an example, before diving into the code?
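For concreteness, the kind of method-level decision being debated looks roughly like this (a made-up C++ declaration, not from the actual design):

```cpp
#include <string>

// Design-level choices in question: const-correctness, pass-by-reference,
// and whether assignment is allowed at all.
class SecurityContext {
public:
    explicit SecurityContext(const std::string& userName);        // explicit, const-ref parameter
    SecurityContext& operator=(const SecurityContext&) = delete;  // assignment disallowed by design
    bool hasPermission(const std::string& action) const;          // const member function
};
```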
I wouldn't recommend going to this level, but then again you've already gone past where I would go in a design specification. My personal feeling is that putting a lot of effort into detailed design up-front is going to be wasted as you find out in developing code that your guesses as to how the code will work are wrong. I would stick with a high-level design and think about using TDD (test driven development) to guide the low-level design and implementation.
I would say it makes no sense at all, and that you have gone too far already. If you are new to C++ you are in no position to write a detailed design document for a C++ project. I would recommend you try to implement what you already have in C++, learn by the inevitable mistakes (like public attributes) and then go back and revise it.
Since you're new, it probably makes sense not to drill down.
Reason: You're still figuring out the language and how things are best structured. That means you'll make mistakes initially and you'll want to correct them without constantly updating the documentation.
It really depends on who the design document is targeted at. If it's for a boss who is non-technical, then you are good with what you have.
If it's for yourself, then you are using the tool to help you, so you decide. I create method level design docs when I am creating a project, but it's at a high level so I can figure out what the features of the various classes should be. I've found that across languages, the primary functionalities of a class have little to do with the programming language we are working in. Some of the internal details and functions required certainly vary due to the chosen language, but those are implementation details that I don't bother with during the design phase.
It certainly helps me to know that for instance an authorization class might have an authenticate function that takes a User object as a parameter. I don't really care during design that I might need an internal string md5 function wrapper to accomplish some specific goal. I find out about that while coding.
The goal of initial design is to get organized so you can make progress with clarity and forethought rather than tearing out and reimplementing the same function 4 times because you forgot some scenario due to not planning.
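A design-level sketch of that authorization example (hypothetical C++ classes; only the signature is being designed, and the hashing details are deliberately left out):

```cpp
#include <string>
#include <utility>

// Design-level: only what callers need to know.
class User {
public:
    explicit User(std::string name) : name_(std::move(name)) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

class Authorization {
public:
    // The design fixes this signature; how credentials are hashed or stored
    // (md5 wrapper, salting, etc.) is an implementation detail left for coding.
    bool authenticate(const User& user, const std::string& password);
};
```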
EDIT: I work in PHP a lot, and I actually use PhpDoc to do some of the design docs, by simply writing the method signature with no implementation, then putting a detailed description of what the method should do in the method header comments. This helps anyone that is using my class in the future, because the design IS the documentation. I can also change the documentation if I do need to make some alterations while coding.
I work in PHP 4 a lot, so I don't get to use interfaces. In PHP 5, I create the interface, then implement it elsewhere.
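The same "the design is the documentation" approach carries over to C++ (a hypothetical sketch): declare the method in a header with a documentation comment and no implementation yet, so the design and the documentation are the same artifact.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// article_repository.h -- design-as-documentation; no implementation exists yet.
class ArticleRepository {
public:
    /**
     * Returns the titles of the most recent articles, newest first.
     * @param count  maximum number of titles to return
     */
    std::vector<std::string> recentTitles(std::size_t count) const;
};
```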
The best way to specify how the code should actually fit together is in code. The design document is for other things that are not easily expressed in code. You should use it for describing the actual need the program fills, how it interacts with users, and what the constraints are in terms of hardware and operating systems. Certainly describe the overall architecture of your application in a design document, but, for instance, the API should actually be described in the code that exposes the API.
You have already gone far enough with the documentation. As you are still a beginner in C++, once you understand the language better you might want to change the structure of your program, and then you would have to change the documentation as well. I would suggest you have already gone too far with the documentation; there is no need to drill into it any more.
Like everyone else says, you've gone way past where you need to go with the design. Do you have a good set of requirements to the simple true/false statement level that you derived that design from? You can design all day long, but if you don't have requirements that simply say WHAT you're going to do, it doesn't matter how good your design is.