When I went to access an argument in a CFC method, I was told it didn't exist. When I returned and output my arguments struct, I found that it had been placed in another struct with a key of "1".
For some reason, I now need to access my arguments in this CFC with arguments[1].name.
I am passing in a Form struct. If I dump this form struct before passing it into the method, it is just its own struct. If I immediately return and output the arguments, it is now nested in this new struct... but I just can't see where or why this would be happening. I am comparing it to other CFCs whose arguments I can access with just arguments.???? and they look the same.
The CFC is persistent with ORM, but I have other persistent CFCs that still have arguments as normal.
Any ideas on what might be causing this would be greatly appreciated.
Jason
If you are using myObject.myMethod( form ), this will exhibit the behavior you describe.
Try using
myObject.myMethod( argumentCollection = form )
I am currently working on an HTTP API that I want to use to perform CRUD operations on a database. I try to write the code for it as generic and modular as possible. I am using the MySQL X DevAPI.
Currently, I am stuck on the following problem:
mysqlx::Result MySQLDatabaseHandler::jsonToCUDOperation (const nlohmann::json& json, mysqlx::Table& table, int crudEnum)
The function above takes as an argument a reference to a json object, a reference to a table object and an integer.
What I want this function to do is:
Check the integer to decide what operation to perform
Check the size of the json to know how many parameters are going to be passed to the variadic X DevAPI function used to perform the operation.
Assemble and perform the function call
For example, assume a table "users", as well as a json object "X" with following contents:
{"id":1,"username":"test_user","email":"test#test.com","first_name":"test"}
Now, when I would call the function like this
jsonToCUDOperation(X, users, MySQLDatabaseHandler::cud::create);
I would want the function to parse the json object and call the mysqlx::Table::Insert function with parameters (and parameter count) based on the json object's keys and values, so eventually calling
users.insert("id", "username", "email", "first_name")
.values("1", "test_user", "test#test.com", "test").execute();
I first thought about achieving this behavior with a template function, but then I figured it wouldn't make sense, since template instantiations are generated at compile time, while what I want requires dynamic behavior at runtime. So I thought it is not possible to design this as I intend, since it was my understanding that the behavior of a C++ function cannot change at runtime based on the parameters you pass to it. But before I begin developing a solution that can only handle a limited json object size, I figured I'd ask here to confirm that I actually can't do what I want.
Thanks in advance for enlightening me
You can actually just pass STL containers to the CRUD functions provided by MySQL's X DevAPI.
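For example, here is a minimal sketch of the create path under that assumption: the column names and values are collected into std::vector containers at runtime and handed to insert() and values() in place of their variadic argument lists. The class stub and the enum member names other than create are placeholders standing in for the code in the question, and the container overloads should be verified against the X DevAPI reference for your Connector/C++ version.

#include <mysqlx/xdevapi.h>
#include <nlohmann/json.hpp>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Reduced stand-in for the handler class in the question; only what the
// sketch needs. The enum members other than create are placeholders.
class MySQLDatabaseHandler
{
public:
    enum cud { create, update, remove };

    mysqlx::Result jsonToCUDOperation(const nlohmann::json& json,
                                      mysqlx::Table& table,
                                      int crudEnum);
};

mysqlx::Result MySQLDatabaseHandler::jsonToCUDOperation(const nlohmann::json& json,
                                                        mysqlx::Table& table,
                                                        int crudEnum)
{
    if (crudEnum == cud::create)
    {
        // Build the column and value lists at runtime instead of writing a
        // call whose arity is fixed at compile time.
        std::vector<std::string>   columns;
        std::vector<mysqlx::Value> values;

        for (auto it = json.begin(); it != json.end(); ++it)
        {
            columns.push_back(it.key());

            if (it->is_string())
                values.emplace_back(it->get<std::string>());
            else if (it->is_number_integer())
                values.emplace_back(it->get<std::int64_t>());
            else if (it->is_number_float())
                values.emplace_back(it->get<double>());
            else if (it->is_boolean())
                values.emplace_back(it->get<bool>());
            else
                values.emplace_back(it->dump()); // fall back to the raw JSON text
        }

        // The containers are passed where the variadic column/value lists
        // would otherwise go (check that your Connector/C++ version supports this).
        return table.insert(columns).values(values).execute();
    }

    // The update and delete branches would be built the same way from
    // Table::update() and Table::remove().
    throw std::invalid_argument("unsupported CRUD operation");
}

With that in place, jsonToCUDOperation(X, users, MySQLDatabaseHandler::cud::create) should produce the same statement as the hand-written users.insert(...).values(...).execute() call above.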
I have this view function search(request). The URL suffix is /search. It takes a few POST parameters and shows search results accordingly.
I want to make a second function, show_popular(request). It takes no POST or GET parameters, but it should emulate a call to the search function with some hard-coded POST parameters.
I want to achieve this without changing anything in any existing function and without changing setup. Is that possible?
EDIT: I know this can be achieved by refactoring the search into a separate function and having several view functions call it. But in this particular case, I am not interested in that. In my case the show_popular function is only temporary, and for irrelevant reasons I do not wish to refactor.
Yes, but you don't want to do that. Refactor search() into a function that handles the request and a function that performs the search, and call the latter from show_popular().
In my company's code, I've often seen component files used by initializing an object of that component and calling the methods off the object. However, it seems to me somewhat more straightforward to use cfinvoke, especially when only using one method from the component file. What are the differences between these two ways of calling a component function, and what are the pros/cons of each? When should I use which?
One other benefit of using createObject() is that you can chain the init() method, e.g.
<cfset myObject = createObject("com.path.MyObject").init() />
And if your init() returns this you can go further and chain the method if you don't need to use the object again:
<cfset functionResults = createObject("com.path.MyObject").init().myFunction() />
It's worth pointing out that in CF 9 you can use the new (ahem) new syntax to create objects. For example, to create the same object as above and call its init() I can write:
<cfset myObject = new com.path.MyObject() />
It's neat and I like the option to do this. CF is moving in the right direction in my opinion with features like this.
cfinvoke can only be used in tags.
createObject can be used in both tags & cfscript and tends to be a bit slimmer / easier to read IMO.
Until recently I avoided using cfinvoke because I found it "bulky", but a pro is that you can dynamically loop over the methods within a CFC. With createObject you can't.
So if, for example, I've got a CFC which has the methods method1, method2, method3 and method4, I can loop over them like so:
<cfloop from="1" to="4" index="element">
    <cfif structKeyExists(this,'method#element#')>
        <cfinvoke component="#this#" method="method#element#" returnVariable="methodValue" />
        <cfset arrayAppend(myArray,methodValue) />
    </cfif>
</cfloop>
--
Another thing to note is that some shared hosts lock down createObject, mainly because of the access it gives to the underlying Java.
You've nearly answered it yourself: on the surface, one could say that if you will be calling only one method on a page, then doing it in one fell swoop with CFINVOKE (which instantiates the CFC and calls the one named method) makes sense. And certainly if you would call more than one method of the CFC on a page, then separating the steps makes sense (instantiate the CFC with the createObject function or cfobject tag, then invoke methods on that object, a pointer to the CFC instance), so that you don't pay the instantiation cost more than once.
But do keep in mind that if the page is called often, it may make sense also to save that result of instantiating the CFC, so that it can be reused on a subsequent request to the page. You would do that by storing it (the result of cfobject/createobject) not in a local variable but instead in a shared scope: whether server, application, or session, based on "who" would benefit from such reuse. Of course, it's then incumbent on you to programmatically handle/decide how long to save this "cached" CFC instance.
Just as important, when you save a CFC instance this way, you become more susceptible to the "var scope bug", which basically means you need to be even more careful to var any local variables you create in the CFC. Rather than try to elaborate on that here, I'll point you to a meta-resource I created on the topic:
http://www.carehart.org/blog/client/index.cfm/2010/3/4/resources_on_the_var_scope_problem
Hope that helps.
Rather than rehash this discussion, I'll just point you towards Google:
http://www.google.com/search?q=cfinvoke+vs+createobject
There are some subtle differences (i.e. <cfinvoke> is capable of handling dynamic method names), but essentially it just boils down to personal preference. Well, that and the fact that you can't use <cfinvoke> via <cfscript>.
Is there an easy way to pass all the variables a template file has access to onto a partial when I have output escaping on?
I tend to create a template file, then refactor things into a partial at some point and it would seem that there would be an easy way to just pass all the same variables from the template to the partial and be done with it.
I have output escaping on and I can't just pass in $sf_data.
It looks like calling a partial from within another partial is very simple... just pass in the variable $vars.
Edit:
This is in regards to Symfony 1.2+
Which version of Symfony are you using?
TIP: New in symfony 1.1: Instead of resulting in a template, an action can return a partial or a component. The renderPartial() and renderComponent() methods of the action class promote reusability of code. Besides, they take advantage of the caching abilities of the partials (see Chapter 12). The variables defined in the action will be automatically passed to the partial/component, unless you define an associative array of variables as a second parameter of the method.
So if you just do not pass the second argument to include_partial(), I guess you're done...
EDIT: completely wrong. Let's see what is done in renderPartial(): there is a call to getPartial(), which does this:
$vars = null !== $vars ? $vars : $this->varHolder->getAll();
So now, you can create a variable with all variables in your action:
public function executeStackOverflow()
{
    $this->testVar = 42;
    $this->allVars = $this->varHolder->getAll();
}
Now you can call your partials and give them $allVars as the second argument. Access granted to all variables.
I'm having a bit of difficulty passing a reference type between webservices.
My set up is as follows.
I have a console application that references two web-services:
WebServiceOne
WebServiceTwo
WebServiceOne declares the details of a class I am using in my console application...let's call it MyClass.
My console application calls WebServiceOne to retrieve a list of MyClass.
It then sends each MyClass off to WebServiceTwo for processing.
Within the project that holds WebServiceTwo, there is a reference to WebServiceOne so that I can have the declaration of MyClass.
The trouble I'm having is that, when I compile, it can't seem to determine that the MyClass passed from the console application is the same as the MyClass declared in WebServiceOne referenced in WebServiceTwo.
I basically get an error saying Console.WebServiceOne.MyClass is not the same as MyProject.WebServiceOne.MyClass.
Does anyone know if doing this is possible? Perhaps I'm referencing WebServiceOne incorrectly? Any idea what I might be doing wrong?
My only other option is to pass each of the properties of the reference type directly to WebServiceTwo as value types...but I'd like to avoid that since I'd end up passing 10-15 parameters.
Any help would be appreciated!
I had a chat with one of the more senior guys at my work and they proposed the following solution that has worked out well for me.
The solution was to use a Data Transfer Object and remove the reference to WebServiceOne in WebServiceTwo.
Basically, in WebServiceTwo I defined a representation of all the value type fields needed as BenefitDTO. This effectively allows me to package up all the fields into one object so I don't have to pass each of them as parameters in a method.
So for the moment, that seems to be the best solution...since it works and achieves my goal.
It's likely that I didn't explain my question very well...which explains why no one was able to help...
But thanks anyway! :-)
Move the types to a separate assembly and ensure that both services use this. In the web service reference there is probably some autogenerated code called Reference.cs. Alter this to use your types.
Edit: To reflect comments
In that case, take the Reference.cs from the web service you cannot control and use it as the shared type.
Your error message explains the problem. The proxy class on the client side is not the same type as the original class on the server side, and never will be. Whether it's a reference type or a value type is irrelevant to how it works.
I don't quite understand what your exact problem is, but here are a few guesses:
If you are trying to compare two objects for equality, then you will have to write your own compare function that compares the values of each significant property/field in turn.
If you are trying to copy an object from one service to the other, then you will have to write your own copy function that copies the values of each significant property/field in turn.
If you were using WCF, you would have the option of bypassing all this and just sharing one class definition between the client and both services.