I'm working on the new system and there are a few things that are different from what I'm used to seeing. Basically, there is a jQuery ajax call with type "POST" and a URL pointing to a .cfm page; the .cfm page returns an HTML table.
After talking to the lead developer, he mentioned that this method is more efficient: by calling a .cfm page we do not create a new instance on each call, whereas calling a function on a .cfc creates a new instance every time. I do not know everything that happens behind the scenes in the deeper layers of ColdFusion.
He also mentioned that this approach is better since we do not use any frameworks. I have been working with ColdFusion for the past 4 years, and what I have seen in the past is jQuery ajax calling a component (.cfc) with a specific method name; the data is returned and the table is built dynamically, as in the sketch below. I was wondering if someone knows more about this and why calling a .cfm page might be better than calling a .cfc.
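Just to make the two styles concrete, here is a rough sketch (in Python with requests, purely to show the two call shapes over HTTP; the paths and field names are made up):

```python
import requests

BASE = "http://example.com/app"  # hypothetical application root

# Style 1: POST to a .cfm page, which renders and returns an HTML table.
html_table = requests.post(f"{BASE}/userTable.cfm",
                           data={"departmentId": 42}).text

# Style 2: POST to a remote method on a .cfc component; the method name is
# passed as a parameter and the raw data comes back for client-side rendering.
users = requests.post(f"{BASE}/userService.cfc",
                      data={"method": "getUsers",
                            "departmentId": 42,
                            "returnFormat": "json"}).json()
```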
Thank you.
Too long for a comment
I agree with what the others have already said. There is no specific answer because it always depends on more things than just this bit of code. Having said that...
I found this in the Adobe documentation here, which seems relevant. Below is an excerpt from that documentation; the part that matters is what happens after the CFC is instantiated. You can read more at the link.
When to use CFCs
You can use CFCs in the following ways:
Developing structured, reusable code
Creating web services
Creating Flash Remoting elements
Using asynchronous CFCs
Developing structured, reusable code
CFCs provide an excellent method for developing structured applications that separate display elements from logical elements and encapsulate database queries. You can use CFCs to create application functionality that you (and others) can reuse wherever needed, like user-defined functions (UDFs) and custom tags. If you want to modify, add, or remove component functionality, you make changes in only one component file.
CFCs have several advantages over UDFs and custom tags. These advantages, which CFCs automatically provide, include all of the following:
The ability to group related methods into a single component, and to group related components into a package
Properties that multiple methods can share
The This scope, a component-specific scope
Inheritance of component methods and properties from a base component, including the use of the Super keyword
Access control
Introspection for CFC methods, properties, and metadata
CFCs have one characteristic that prevents them from being the automatic choice for all code reuse. It takes relatively more processing time to instantiate a CFC than to process a custom tag. In turn, it takes substantially more time to process a custom tag than to execute a user-defined function (UDF). However, after a CFC is instantiated, calling a CFC method has about the same processing overhead as an equivalent UDF. As a result, do not use CFCs in place of independent, single-purpose custom tags or UDFs. Instead, use CFCs to create bodies of related methods, particularly methods that share properties.
For more information about UDFs, custom tags, and other ColdFusion code reuse techniques, see Creating ColdFusion Elements.
Creating web services
ColdFusion can automatically publish CFC methods as web services. To publish a CFC method as a web service, you specify the access="remote" attribute in the method's cffunction tag. ColdFusion generates all the required Web Services Description Language (WSDL) code and exports the CFC methods. For more information on creating web services in ColdFusion, see Using Web Services.
Now, I don't always trust the Adobe documentation, as they have a nasty habit of just carrying the existing documentation forward from version to version, so who knows when this was originally written and whether it is still true. Also, this is specific to Adobe's ColdFusion; Lucee is likely better at handling this, but I'm not sure.
That document also refers to this document - Selecting among ColdFusion code reuse methods. I will include that info here as well.
The following table lists common reasons to employ code reuse methods and indicates the techniques to consider for each purpose. The letter P indicates that the method is preferred. (There can be more than one preferred method.) The letter A means that the method provides an alternative that is useful in some circumstances.
This table does not include CFX tags. You use CFX tags only when it is best to code your functionality in C++ or Java. For more information about using CFX tags, see Using CFX tags.
TL;DR: I'd like a tool that receives a RESTful schema as input and produces a PyQt dialog/UI as output, preferably with automatic submission/validation.
I'm working on a PyQt5 application that interacts with a remote Django server using django-rest-framework.
I find that I define most of my models/views/serializers quite quickly, as they neatly extend one another. After writing a proper model, generating the serializer and view is very easy, and I end up with a fully functioning server side fast.
The client/GUI side is a different matter. I have to define the available fields again, along with their types and order. I have to define widgets for viewing a single object and a list of objects. I have to define edit interfaces and handle permissions.
This all seems like it could use some sort of automation.
Ideally, I could point a smart widget or form at a REST endpoint, and it would automatically fetch the schema and the allowed actions, then create a GUI and the necessary error handling.
Ideally, this shouldn't depend on the server-side technology at all and would simply use a schema (something like the sketch below).
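A minimal sketch of what I have in mind, assuming a django-rest-framework-style OPTIONS response (the endpoint URL and the type-to-widget mapping are just placeholders):

```python
import sys
import requests
from PyQt5.QtWidgets import (QApplication, QCheckBox, QDialog,
                             QFormLayout, QLineEdit, QPushButton, QSpinBox)

# Map schema field types to Qt widgets; anything unknown falls back to QLineEdit.
WIDGETS = {"integer": QSpinBox, "boolean": QCheckBox}

def build_form(endpoint: str) -> QDialog:
    """Fetch the endpoint's OPTIONS schema and build an edit dialog from it."""
    schema = requests.options(endpoint).json()
    fields = schema.get("actions", {}).get("POST", {})

    dialog = QDialog()
    layout = QFormLayout(dialog)
    for name, meta in fields.items():
        if meta.get("read_only"):
            continue
        widget = WIDGETS.get(meta.get("type"), QLineEdit)()
        layout.addRow(meta.get("label", name), widget)

    submit = QPushButton("Save")
    submit.clicked.connect(dialog.accept)  # a real tool would POST the values back
    layout.addRow(submit)
    return dialog

if __name__ == "__main__":
    app = QApplication(sys.argv)
    build_form("http://localhost:8000/api/languages/").exec_()
```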
I've googled and couldn't find anything like that. Can someone point me at something similar? Is there a deeper issue with creating such a tool I'm missing?
Thanks!
I have a JSP project which uses the Liferay framework. There are default Liferay cookies named COOKIE_SUPPORT and GUEST_LANGUAGE_ID. I don't want hackers to learn anything about my technology stack by any means. How can I rename these cookies?
If you want to hide the framework you're using, the cookie names are not the main thing to worry about. Worry about server identification, elements of the DOM, the structure and mechanics of URLs, a secure and hardened setup of your server, common translations, default content, standard error messages, etc.
In other words: if you don't want to give away which standard framework you're using (and this is not limited to Liferay), you'll have to roll your own. Good luck getting that as powerful and as well tested as any standard framework.
Rather, worry about keeping your systems updated at all times and protecting against well-known vulnerabilities in older versions. For hardening Liferay specifically, you might want to start with my blog series on securing Liferay (the link is to chapter 1, which refers to the other chapters).
Promoting a comment into this answer: one way to find out how to change them is to search for their names in the source code and identify the kind of plugin you need in order to provide different values - most likely this will be an ext-plugin. After all, Liferay's source is available (see the sketch below for a quick way to locate the names). I don't see any way short of this.
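For instance, a throwaway sketch (Python, with the source path as a placeholder) for locating where those cookie names appear:

```python
import pathlib

# Hypothetical local checkout of the Liferay portal source.
SRC = pathlib.Path("/path/to/liferay-portal-src")

for path in SRC.rglob("*.java"):
    text = path.read_text(errors="ignore")
    for name in ("COOKIE_SUPPORT", "GUEST_LANGUAGE_ID"):
        if name in text:
            print(name, path)
```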
Do you fire ajax requests through the MVC framework of choice, or directly to the CFC?
I'm leaning towards bypassing the MVC, since I need no 'View' from the ajax request.
What are the pros of routing ajax calls through an MVC framework, like ColdBox?
Update: I found this page http://ortus.svnrepository.com/coldbox/trac.cgi/wiki/cbAjaxHints but I am still trying to wrap my mind around what benefits it brings over the complexity it introduces...
Henry, I make my Ajax requests to proxy objects of my model. Typically, I am outside of a 'framework' when doing so. That being said, it may be (very) necessary to utilize your framework, such as when working within a set security model.
I can't really see any benefit of bypassing the MVC framework - in combination, those three elements are your application.
Your ajax elements are really part of the view. As Luca says, the view outputs the results of the model and controller.
Look at it this way - if you made an iPhone-friendly web interface (that is, a new View), would you bypass the model and controller?
Luis Majano, creator of ColdBox, said:
These are the two schools of ajax interaction, Henry.
I prefer the proxy approach because it adds the following:
Debugging
Tracing in the debugger
AOP interception points
Security
Setting availability
The proxy will relay to the event model, so I can use local interception points, local AOP, plugins, etc. In other words, it can be a highly monitored call instead of a simple service cfc call, which you can still do.
I, for one, love to have my execution profiler running (part of the ColdBox debugger), so I can see when ajax requests come in and when they come out. I can see the data requested and the data sent back. I don't have to look in log files, or try to imagine results or problems. It really helps out in debugging.
However, it would be a developer choice in which way you decide to go. My personal preference is to always use my proxy to event delegation because it gives me much more flexibility, debugging and peace of mind.
The purpose of the "view" in MVC frameworks is to show the data after the "model" and "controller" have generated it. If you don't need the "view", then what's the point of using such a design pattern?
I agree with Luca. It also bypasses any sanitization and filtering logic you have in your model/controller stack, and it basically negates any query processing that you may or may not have in place.
Yeah, I wouldn't bypass your framework. Figure out what's causing you grief and hunt down the offending pieces: add logic to exclude common components such as headers or footers, and look for methods injecting whitespace that, while fine for HTML, is annoying or downright problematic when parsing JSON.
Adding output="false", especially in your Application.cfc and its methods, would be the first thing I cleaned up.
I am a strong believer in NEVER accessing the CFCs directly. I find it creates long-term problems when a major refactor wants to consolidate or eliminate components; the direct accesses potentially make this harder than it should be, especially if a third party is hitting your ajax from another domain (e.g. Flash Remoting).
+1 to Steve's answer.
In the project there are two data sources: one is the project's own database, the other is a (semi-)legacy web service. The problem is that the admin part has to keep them in sync and manage both, so that the user doesn't have to know they're separate (or does know, but doesn't care).
Here's an example: there's a list of languages. Both apps - project and legacy - need to use it. However, each adds its own meaning: for example, the project may need an active/inactive flag, and the legacy system needs a language code.
But the admin part has to manage everything - language name, active/inactive, language code. When loading, data from both systems has to be merged and presented; when saving, data has to be updated in both systems.
Thus, what's the best way to represent this separated data (to be used in the admin page)? Note that I use ASP.NET MVC / NHibernate.
How do I manage legacy data?
Do I connect the admin part to the legacy web service's external interface - which currently only has GetXXX() methods - and add the missing C[R]UD methods?
Or do I connect directly to the legacy database, which is possible since I do control it?
Where do I do split/merge of data - in the controller/service layer, or in the repository/data layer?
In the controller layer, I'd do var viewmodel = new ViewModel { MyData = ..., LegacyData = ... };. The problem: code cluttered with legacy concerns.
In the data layer, I'd do var model = repository.Get(id) and the model would contain data from both worlds; when I do repository.Save(entity), it would update both data sources (in the local db, only project-specific fields are stored). The problems: a) a possibly leaky abstraction; b) always fetching data from the web service when it is only needed sometimes, and usually only for the admin part. (See the sketch after this list.)
A modification: use an ICombinedRepository<Language> which provides the additional split/merge. Problems: still need either a new model or an IWithLegacy<Language, LegacyLanguage>...
Have a single "sync" method; this would remove legacy items not present in the project item list, update those that are present, create legacy items that are missing, etc...
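To illustrate the repository-layer option, a minimal sketch (in Python for brevity; the project itself is C#/NHibernate, and names like db, legacy and their methods are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Language:
    """Merged admin view: project fields plus legacy fields."""
    id: int
    name: str      # stored in the project database
    active: bool   # project-specific field
    code: str      # legacy-specific field

class CombinedLanguageRepository:
    """Hides the fact that a Language spans two data sources."""

    def __init__(self, db, legacy):
        self.db = db          # project database gateway (assumed)
        self.legacy = legacy  # legacy service/database gateway (assumed)

    def get(self, language_id: int) -> Language:
        local = self.db.get_language(language_id)       # name, active
        remote = self.legacy.get_language(language_id)  # code
        return Language(id=language_id, name=local.name,
                        active=local.active, code=remote.code)

    def save(self, lang: Language) -> None:
        # Each side only receives the fields it owns.
        self.db.save_language(lang.id, name=lang.name, active=lang.active)
        self.legacy.save_language(lang.id, code=lang.code)
```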
Well, to summarize the main issues:
do I develop a CRUD interface on the web service, or connect directly to its database (which is under my complete control, so I may even later decide to move that web service part into the main app or make it use the main db)?
do I have separate classes for the project's and the legacy entities, managed separately, or do the project's entities carry all the legacy fields, managed transparently on save/load?
Anyway, are there any useful tips on managing mostly duplicated data from different sources? What are the best practices?
In the non-admin part, I'd like to completely hide the notion of legacy data, which is what I do now behind the repository interfaces. But for the admin part it's not that clear or easy...
What you are describing here seems to warrant the need for an Anti-Corruption Layer. You can find solutions related to this topic here: DDD, Anti Corruption layer, how-to?
When you have two conceptual Bounded Contexts, but you're only using DDD for one of them, the Anti-Corruption layer comes into play. When reading from your data source (performing a get operation [R]), the anti-corruption layer will translate your legacy data into usable objects for your project. When writing to your data source (performing a set operation [CUD]), the anti-corruption layer will translate your DDD objects into objects understood by your legacy code.
Whether or not to use the existing Web Service depends on whether or not you're willing to change existing code. Sticking with DRY practices, you don't want to duplicate what you already have. If you want to keep the Web Service, you can add CUD methods inside the anti-corruption layer without impacting your legacy application.
In the anti-corruption layer, you will want to make use of adapters and facades to bring together separate classes for your DDD project and the legacy application.
The anti-corruption layer is exactly where you handle splitting and merging.
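For illustration, a minimal sketch of such a layer (in Python for brevity; the record shapes and class names are hypothetical):

```python
class LegacyLanguageRecord:
    """Assumed shape of a record as the legacy service returns it."""
    def __init__(self, lang_code: str, display_name: str):
        self.lang_code = lang_code
        self.display_name = display_name

class Language:
    """The project's own model."""
    def __init__(self, name: str, code: str, active: bool = True):
        self.name = name
        self.code = code
        self.active = active

class LanguageAntiCorruptionLayer:
    """Translates between the legacy shape and the project's model."""

    def to_project(self, record: LegacyLanguageRecord) -> Language:
        # Reads (the [R] in CRUD): legacy -> project.
        return Language(name=record.display_name, code=record.lang_code)

    def to_legacy(self, lang: Language) -> LegacyLanguageRecord:
        # Writes (the C, U, and D): project -> legacy.
        return LegacyLanguageRecord(lang_code=lang.code,
                                    display_name=lang.name)
```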
Let me know if you have any questions on this, as it can be a somewhat advanced topic. I'll try to answer as best I can.
Good luck!
The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service APIs, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference in a new project, copy the References.vb into the solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. 2 of these contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services looks up a user by user ID, and the other pulls back blocks of users. Both return an object (or list of objects) that is exactly the same semantically, and I need to process the data the same way whether it came from one service or the other.
My solution was to strip out the duplicated classes in the services and replace them with classes inherited from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy, so this change will need to be made every time I regenerate the proxy.
I'm curious what you all think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make calls to the abstraction layer you always use the same objects, which are squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
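A rough sketch of that idea (in Python for brevity; the real code would be VB.NET, and the proxy members and client classes here are made up):

```python
class SharedUser:
    """The one type the rest of the codebase works with."""
    def __init__(self, user_id: str, name: str):
        self.user_id = user_id
        self.name = name

# The two generated proxy types live in different namespaces but are
# structurally identical, so each gets a tiny translation function.
def from_search_service(proxy_user) -> SharedUser:
    return SharedUser(proxy_user.UserID, proxy_user.Name)

def from_batch_service(proxy_user) -> SharedUser:
    return SharedUser(proxy_user.UserID, proxy_user.Name)

class UserGateway:
    """Abstraction layer: callers never see the generated proxies."""
    def __init__(self, search_client, batch_client):
        self.search_client = search_client  # proxy for the user-lookup service
        self.batch_client = batch_client    # proxy for the block-of-users service

    def find_user(self, user_id: str) -> SharedUser:
        return from_search_service(self.search_client.GetUser(user_id))

    def all_users(self) -> list[SharedUser]:
        return [from_batch_service(u) for u in self.batch_client.GetUsers()]
```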
I think you really should be looking at WCF for 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects; the code generation then builds a shared library of types which gets used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete for us.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures match. I was unable to use it in my situation, though, because the various wsdl contracts were carelessly namespaced.