Which is better for UDFs: CFC vs CFM - coldfusion

I have several logically related UDFs in a single file in my application.
The question is: should that single file be a CFC or a CFM file, and why?
I've referred to several links, like those below, but they explain more about how to implement the solution. All I want to know is which one is better - CFM or CFC?
How do you organize your small reusable cffunctions?
http://blog.adamcameron.me/2012/07/which-is-better-having-your-methods.html
Thanks for helping.

"Better" is subjective. If a collection of UDFs all work with the same data that you need to pass between them, they should probably be implemented as a CFC so one can have stateful objects so the data and the methods can be encapsulated in their own memory space.
If they're purely static methods, then an included library file might be fine.
Included UDFs pollute the variables scope individually, whereas functions in a CFC instance are accessed via the one object variable, which is a bit tidier.
If CFML had the concept of static methods, I'd always use CFCs, but as CFML doesn't have static methods, there's scope for justifying function libraries as well as CFCs.
Personally: I'd always use CFCs. They just seem more organised and more tidy.

Based on my experience, I would prefer CFCs. Take into consideration that most UDFs are just utility helpers so they only need to be created one time. Placing them in a CFC means you can load them into say the application scope and persist the CFC instance. The UDFs only get created one time for your application. Also, you could have your other CFCs extend this "utility" CFC so that the UDFs are available there as well.
Now with CFMs, any time you include that template, the UDFs get created again for that request. That is additional processing for something that really does not need it. Plus, the point already made about UDFs polluting the variables scope is another big reason to prefer CFCs.

Related

Is there a way to implement the dynamic factory pattern in C++?

The DYNAMIC FACTORY pattern describes how to create a factory that allows the creation of unanticipated products derived from the same abstraction by storing the information about their concrete type in external metadata.
From: http://www.wirfs-brock.com/PDFs/TheDynamicFactoryPattern.pdf
The PDF says:
Configurability. We can change the behavior of an application by just changing its configuration information. This can be done without the need to change any source code (just change the descriptive information about the type in the metadata repository) or to restart the application (if caching is not used; if caching is used, the cache will need to be flushed).
It is not possible to introduce new types to a running C++ program without modifying source code. At the very least, you'd need to write a shared library containing a factory to generate instances of the new type: but doing so is expressly ruled out by the PDF:
Extensibility / Evolvability. New product types should be easily added without requiring neither a new factory class nor modifying any existing one.
This is not practical in C++.
Still, the functionality can be achieved by using metadata to guide some code-writing function, then invoking the compiler (whether as a subprocess or as a library) to create a shared library. This is pretty much what the languages mentioned in the PDF are doing when they use reflection and metadata to ask the virtual machine to create new class instances: it's just more normal in those language environments to have bits of the compiler/interpreter hanging around in memory, so it doesn't seem such a big step.
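For illustration, here is a minimal sketch of that shared-library route on POSIX, assuming a hypothetical plugin that exports a C-linkage create_product symbol (the Product interface and the symbol name are made up for the example):

    // Sketch: load a just-built shared library and ask it for a factory.
    // Link with -ldl on most POSIX systems.
    #include <dlfcn.h>
    #include <memory>
    #include <stdexcept>
    #include <string>

    struct Product {                          // the shared abstraction
        virtual ~Product() = default;
        virtual void run() = 0;
    };

    using Factory = Product* (*)();           // factory signature the plugin exports

    std::unique_ptr<Product> loadProduct(const std::string& libPath) {
        void* handle = dlopen(libPath.c_str(), RTLD_NOW);
        if (!handle) throw std::runtime_error(dlerror());
        // The plugin must define:
        //   extern "C" Product* create_product() { return new ConcreteProduct; }
        auto factory = reinterpret_cast<Factory>(dlsym(handle, "create_product"));
        if (!factory) throw std::runtime_error(dlerror());
        return std::unique_ptr<Product>(factory());
    }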
Yes. Look at the factory classes in the Qtilities Qt library.
@TonyD regarding
We can change the behavior of an application by just changing its configuration information.
It is 100% possible if you interpret the sentence another way. What I read and understand is: you change a configuration file (XML in the doc) that gets loaded, and that changes the behaviour of the application. Say your application has two loggers, one logging to file and one to a GUI. The config file can be edited to choose one or both to be used. Thus nothing in the application changes, but its behaviour does. The requirement is that anything you can configure in the file is already available in the code, so asking it to log over the network will not work if that is not implemented.
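A rough sketch of that reading (the logger types, registry, and config format below are hypothetical; C++17): the factory can only hand out what the code already implements.

    // Sketch: loggers chosen at runtime from a config file, one name per line.
    // A name like "network" would be silently skipped: not implemented,
    // therefore not configurable.
    #include <fstream>
    #include <functional>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct Logger { virtual ~Logger() = default;
                    virtual void log(const std::string& msg) = 0; };
    struct FileLogger : Logger { void log(const std::string&) override { /* to file */ } };
    struct GuiLogger  : Logger { void log(const std::string&) override { /* to GUI  */ } };

    int main() {
        std::map<std::string, std::function<std::unique_ptr<Logger>()>> registry{
            {"file", [] { return std::make_unique<FileLogger>(); }},
            {"gui",  [] { return std::make_unique<GuiLogger>(); }},
        };
        std::vector<std::unique_ptr<Logger>> active;
        std::ifstream cfg("loggers.cfg");            // hypothetical config file
        for (std::string name; std::getline(cfg, name); )
            if (auto it = registry.find(name); it != registry.end())
                active.push_back(it->second());
        for (auto& logger : active)
            logger->log("application started");
    }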
New product types should be easily added without requiring neither a new factory class nor modifying any existing one.
Yes, that sounds a bit impossible. I will accept the ability to add new types without having to change the original application. Thus one should be able to add them via plugins or another method, leaving the application/factory/existing classes intact and unchanged.
All of the above is supported by the example provided. Although Qtilities is a Qt library, the factories are not Qt specific.

Clojure automatically require files

I am trying to make a little web framework in Clojure. I have a bunch of Clojure files in a /handlers directory and I want to require all of them in my project's core namespace. Every file defines its own namespace, for example project.handlers.home. The idea is that when I add a new handler, I don't want to modify the namespace declaration in my core file to include it. The only solution I came up with is to find all the files in the directory and load them with clojure.core/load, but that is far from beautiful or idiomatic. Is there an idiomatic way to do this?
Is there an idiomatic way to do this?
IMO, no. The idioms in Clojure usually favor being explicit over "doing magic", especially when it comes to naming global objects (which auto-loading namespaces clearly is).
I don't know why you wouldn't want to modify your "core" file when adding new handlers but you might consider introducing an additional namespace that loads both "core" and your handlers and hooks them together.
Noir included some functionality like this, making it an explicit API call to load namespaces under a particular directory; see load-views for an example, which it used to auto-load paths.
However, the Noir approach didn't feel idiomatic due to the amount of magic involved, as well as additional complications from the approach (e.g. lingering path definitions).
If you need to find namespaces from a tooling, framework, or library perspective, I would use find-namespaces in tools.namespace and then require/load them. This approach can be useful for providing user-level pluggability, where a user can drop a handler into a directory and see new options appear, though again, being explicit tends to be significantly cleaner.

The best way to handle config in a large C++ project

In order to start my C++ program, I need to read some configs, e.g. IP address, port number, file paths... These settings may change quite frequently (every week, or every day!), so hardcoding them into source files is not a good idea.
After some research, I'm confused about whether there is a best practice for loading config settings from a file and making those configs available to other classes/modules/*.cpp files in the same project.
static is bad; singleton is bad (an anti-pattern?). So, what other options do we have? Or maybe the idea of a "config file" is wrong?
EDIT: I have no problem loading the config file. I'm worried about, after loading all those settings into a std::map<string, string> in memory, how to let other classes and functions access those settings.
EDIT 2: Thanks for everybody's input. I know the patterns I listed here are FINE, and they are used by lots of programs. I'm curious whether there is a (sort of) BEST pattern for handling a program's configuration.
Arguably, a configuration file is a legitimate use for a Singleton. The Singleton pattern is usually frowned upon because Singletons cause problems with race conditions in a multi-threaded environment, and since they're globally accessible, you run into the same problems you have with globals. But if your Singleton object is initialized once when you read in the config file, and never altered after that, I can't think of a legitimate reason to call it an "anti-pattern" other than some sort of cargo-cult mentality.
That being said, when I need to make a configuration file available as an object to my application, I don't use a Singleton. Usually I pass the configuration object around to those objects/functions which need it.
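A minimal sketch of that pass-it-around style, wrapping the std::map<string, string> mentioned in the question (Config, Server, and the key names are made up):

    // Sketch: load once, keep it const, pass it by reference.
    #include <map>
    #include <string>

    struct Config {
        std::map<std::string, std::string> values;
        const std::string& get(const std::string& key) const { return values.at(key); }
    };

    class Server {
    public:
        explicit Server(const Config& cfg)                   // dependency handed in,
            : ip_(cfg.get("ip")), port_(cfg.get("port")) {}  // not fetched globally
    private:
        std::string ip_, port_;
    };

    int main() {
        Config cfg;                                  // in practice, parse the file here
        cfg.values = {{"ip", "127.0.0.1"}, {"port", "8080"}};
        Server server(cfg);                          // read-only after load
    }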
The best pattern I know of for solving this is an options class that gets injected into your code on creation/configuration.
Steps:
create an options parser class
configure the parser on what parameters and options it should accept, and their default values (default values can be your "most probable" defaults)
write client code to accept options as parameters (instead of singleton and/or static stuff).
inject options when creating objects.
Have a look at boost.program_options for an already-mature library for program options.
If you're familiar with Python, have a look at the examples in the argparse docs (same concept, implemented in a Python library). They make it very easy to grasp the concept and the interactions.
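A minimal sketch with boost.program_options (the option names and defaults are placeholders):

    // Sketch: describe the options once, parse, then inject plain values
    // (or a small options struct) into the objects that need them.
    #include <boost/program_options.hpp>
    #include <iostream>
    #include <string>

    namespace po = boost::program_options;

    int main(int argc, char* argv[]) {
        po::options_description desc("Options");
        desc.add_options()
            ("help", "show this help")
            ("ip",   po::value<std::string>()->default_value("127.0.0.1"), "server address")
            ("port", po::value<int>()->default_value(8080), "port number");

        po::variables_map vm;
        po::store(po::parse_command_line(argc, argv, desc), vm);
        po::notify(vm);

        if (vm.count("help")) { std::cout << desc << "\n"; return 0; }

        int port = vm["port"].as<int>();             // inject from here on
        (void)port;
    }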

Is it better to define global (extern) variables in a single header, or in their respective header files?

I'm working on a small software project which I hope to release in the future as open-source, so I was hoping to gather opinions on what the best currently accepted practices are in regards to this issue.
The application itself is procedural, not object-oriented (there is no need for me to encapsulate the rendering functions or event-handling functions in a class), but some aspects of the application, like the scripting console, are heavily object-oriented. The OO aspects of the code have the standard object.cpp and object.h files.
For the procedural part, I have my code split up into various files (e.g. main.cpp, render.cpp, events.cpp), each which might have some global variables specific to that file. I also have corresponding header files for each, defining all functions and variables (as extern) that I want to be accessible from other files. Then, I just #include the right header when I need access to that function/variable from another source file.
I realized today that I could also have another option: create a single globals.h header file, where I could define all global variables (as extern again) and functions that would be needed outside of a specific source file. Then, I could just #include this file in all of the source files (instead of each individual header file like I do now). Also, using this method, if I needed to promote a variable/function to global (instead of local), I could just add the entry to the header file.
The Question: Is it a better practice to use a corresponding header file for every single .cpp file (and define the variables/functions I want globally accessible in those headers), or use a single header file to declare all globally accessible variables/functions?
Another quick update: most (but not all) of the globals are used as such because my application is multithreaded.
To me it is way better to have a header file corresponding to each implementation (.c or .cpp) file. You must think of your classes, structures, and functions as modules; if you split your implementation, it is logical to split your declarations too.
Another thing is that when you modify a header file, every file that includes it has to be recompiled at build time, and I can tell you that can take a long time. You can avoid rebuilding everything by properly splitting your declarations.
I would recommend having more headers and putting less in each. You still have the litany of includes, but that is simple to understand and edit if it's wrong.
Having one big globals header is harder to cope with if something goes wacky. If you did have to change something, that change is potentially far-reaching and high risk.
More code isn't a bad thing in this case.
A minor point is that your compile times will increase super-linearly the more you put in that one big header, since each and every file has to process it. On an embedded project it is probably less of a worry, but in general having a lot in headers will start to weigh you down.
It's better to put them all in one file and not compile that file at all. If you have global variables you should be rethinking your design, especially if you're doing applications programming and not low-level systems programming.
As I've said in the comments below the question, the first thing to do would be to try and eliminate all global data. If this is not possible, rather than one big header, or throwing externs into each class' header, I'd follow a third approach.
Say your Event class needs to have a global instance. If you declare the global instance in event.cpp and extern it in event.hpp, then this essentially makes these files non-reusable anywhere else. Throwing it into a globals.cpp and globals.hpp is not ideal either because every time that global header gets modified, chances are your entire project will be rebuilt because the header is being included by everyone.
So the third option is to create an accompanying header and source file for each class that needs to have a global instance. So you'd declare the Event global instance in event_g.cpp and extern it in event_g.hpp.
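For concreteness, a tiny sketch of that layout (the g_event name is made up; both files together are only a few lines):

    // event_g.hpp -- the only header consumers include for the global
    #pragma once
    #include "event.hpp"
    extern Event g_event;

    // event_g.cpp -- the one definition
    #include "event_g.hpp"
    Event g_event;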
Yes, it is ugly, and yes, it is tedious. But there's nothing pretty about global data to begin with.

Should I put global application details in a static class?

I'm currently maintaining a legacy C++ application which has put all the global application details in a static class, some of the variables stored are:
application name
registry path
version number
company name, etc.
What is the recommended method for storing and accessing system application details?
If it never changes, then why not? However, if it does change, I'd externalise it into data that's loaded at run time. That way it can change without a rebuild.
And since you include version number, I'd suspect the latter is the way to go.
From my C++ days I recall builds taking not-inconsequential time.
I'd rather use a namespace if no instances of the class will be created.
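A sketch of the namespace version, assuming C++17 for inline variables (the names and values are placeholders):

    // appinfo.hpp -- no class, no instance, no singleton boilerplate
    #pragma once

    namespace appinfo {
        inline constexpr const char* name          = "MyApp";
        inline constexpr const char* registry_path = "Software\\MyCompany\\MyApp";
        inline constexpr const char* version       = "1.2.3";
        inline constexpr const char* company       = "MyCompany";
    }
    // usage elsewhere: appinfo::version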
No harm in that, and I think using a singleton is even better.