Is isDefined() being deprecated and did isNull() take its place? - coldfusion

I am using ColdFusion 9.0.1
I recently read that in ColdFusion 9, it is now advised to use isNull() and not use isDefined().
I didn't find much info on this at all around the web.
Is there any advantage to using one or the other in ColdFusion 9?

No, isDefined() is not deprecated and is not likely to be deprecated.
IsNull() is, as Shawn says, for working with NULLs returned from Java, most specifically those returned from Hibernate (ORM integration).
StructKeyExists() is more precise than isDefined(""), but there is, technically, nothing wrong with using isDefined(), and I would question whether structKeyExists() would work in every situation. Is every variable within a struct of some sort? I am not sure.
I do not think you need to worry about isDefined() going away anytime soon.

I believe the real reason isNull() was added was to provide a more concrete way to test for Java-related NULLs that are returned from objects, services, etc., whereas isDefined() tests whether a variable exists or not.
Two separate functions, really.

I know that in general people have been moving away from it for some time. The main use for it was to determine whether a variable existed in a particular scope, but structKeyExists() is more precise for that. IsDefined() works very hard to find any possible instance of the variable you've asked for. (Sean Corfield had a bit to say about structKeyExists vs. isDefined.)
I've not seen a recommendation anywhere to use isNull() instead of isDefined(). Actually, I would have expected isNull() to throw an error if you give it an undefined variable, but apparently that works.
How do you do dynamic variables, though?
structKeyExists(form,"address_" & i)
You can try array notation...
isNull(form["address_" & i])
...but if that key is undefined, this throws an error.

Regarding isDefined(varName): my CLIENT.somevarname variable has a null value, and when I try to evaluate it
<cfif isDefined("somevarname") and val(somevarname)> </cfif>
.. it throws an error :
Element somevarname is undefined in CLIENT
I used to handle it like this to prevent an error when conditionally comparing the value:
len(trim(CLIENT.somevarname)) eq 0
I switched to using isNull() and it works better.

Related

Pass a C++ method return to a Pro*C procedure

I am aware that in Pro*C it is possible to pass either a host variable or a string literal to a procedure:
dbms_pipe.purge(:pipe_name);
dbms_pipe.purge('pipe_name');
Is it possible to pass a method's return value to a Pro*C procedure? The following call doesn't work:
dbms_pipe.purge(pipe_name.c_str());
Late answer, but still:
First of all, Pro*C is quite dumb. It becomes even more dumb when switching from C to C++ mode.
Your second example does not challenge Pro*C at all, because the string constant is just part of your SQL statement.
Your first example is just what it can do. You cannot access members of structs (but you can read in whole structs), call functions, or whatever. The only way to deal with this is to first copy the result of the function call into a host variable and then pass that to Pro*C. To find the manual, try a Google search for "oracle pro*c developer guide". If you read it carefully, you will understand what you can and cannot do.
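For illustration, a hedged sketch of that copy-into-a-host-variable approach (the DECLARE SECTION, the buffer size, and the variable names here are assumptions, not from the original code):

EXEC SQL BEGIN DECLARE SECTION;
    char pipe_name_host[31];
EXEC SQL END DECLARE SECTION;

/* copy the C++ value into the plain host variable first */
strncpy(pipe_name_host, pipe_name.c_str(), sizeof(pipe_name_host) - 1);
pipe_name_host[sizeof(pipe_name_host) - 1] = '\0';

/* then let Pro*C bind the host variable inside an anonymous PL/SQL block */
EXEC SQL EXECUTE
    BEGIN
        dbms_pipe.purge(:pipe_name_host);
    END;
END-EXEC;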

Better handling of missing/wrong key in boost::program_options

Is there a way to know which key was involved when a call like the following fails?
boost::program_options::variables_map vm;
...
int foo_bar = vm["some_key"].as<int>();
If the key is missing from the map, or is not convertible to int, I get a rather uninformative bad_any_cast, and I can't know any of the following:
the key involved
the stored value, or even if it's there.
the types involved
I can't find any solution that doesn't involve either modifying the Boost header or wrapping every call to the above in a try..catch block.
I think it's a common issue, so maybe someone else knows a better approach.
Marco,
there's no way to get better diagnostics without modifying the library.
However, please note that, in general, I am not sure exceptions in this case are supposed to be very detailed:
- If you use the wrong type to access a variable, you've got a coding error. You can easily track that down with a debugger.
- If you access a variable that does not exist, you either need to check vm.count(), or use a default value (see the sketch below). Again, it's probably a coding error best solved using a debugger.
I agree that bad_any_cast is something that can be improved, but making these exceptions detailed enough to report to the user does not seem like a goal here, since they are the result of coding errors.
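For reference, a minimal sketch of the vm.count() approach; get_int_or is a hypothetical helper, not part of the library:

#include <boost/program_options.hpp>

namespace po = boost::program_options;

// Returns the option's value if present, else a fallback. A type
// mismatch will still raise bad_any_cast, which is a coding error.
int get_int_or(const po::variables_map& vm, const char* key, int fallback) {
    if (vm.count(key))
        return vm[key].as<int>();
    return fallback;
}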

Which school of reporting function failures is better

Very often you have a function which, for given arguments, can't generate a valid result or can't perform some task. Apart from exceptions, which are not so commonly used in the C/C++ world, there are basically two schools of reporting invalid results.
The first approach mixes valid returns with a value that does not belong to the codomain of the function (very often -1) and indicates an error:
int foo(int arg) {
    if (/* everything fine */)
        return some_value;
    return -1; // on failure
}
The second approach is to return the function status and pass the result through a reference parameter:
bool foo(int arg, int & result) {
    if (/* everything fine */) {
        result = some_value;
        return true;
    }
    return false; // on failure
}
Which way do you prefer, and why? Does the additional parameter in the second method bring notable performance overhead?
Don't ignore exceptions, for exceptional and unexpected errors.
However, just answering your points: the question is ultimately subjective. The key issue is to consider what will be easier for your consumers to work with, whilst quietly nudging them to remember to check error conditions. In my opinion, this is nearly always "return a status code, and put the value in a separate reference", but this is entirely one man's personal view. My arguments for doing this:
If you choose to return a mixed value, then you've overloaded the concept of return to mean "Either a useful value or an error code". Overloading a single semantic concept can lead to confusion as to the right thing to do with it.
You often cannot easily find values in the function's codomain to co-opt as error codes, and so need to mix and match the two styles of error reporting within a single API.
There's almost no chance that, if they forget to check the error status, they'll use an error code as if it were actually a useful result. One can return an error code and stick some null-like concept in the return reference that will explode easily when used. If one uses the mixed error/value return model, it's very easy to pass it into another function in which the error part of the codomain is valid input (but meaningless in context).
Arguments for the mixed error code/value model might be simplicity: no extra variables floating around, for one. But to me, the dangers are worse than the limited gains: one can easily forget to check the error codes. This is one argument for exceptions: you literally can't forget to handle them (your program will flame out if you don't).
boost::optional is a brilliant technique. An example will assist. Say you have a function that returns a double, and you want to signify an error when it cannot be calculated.
double divide(double a, double b) {
    return a / b;
}
What to do in the case where b is 0?
boost::optional<double> divide(double a, double b) {
    if (b != 0) {
        return a / b;
    } else {
        return boost::none;
    }
}
Use it like below:
boost::optional<double> v = divide(a, b);
if (v) {
    // Note the dereference operator
    cout << *v << endl;
} else {
    cout << "divide by zero" << endl;
}
The idea of special return values completely falls apart when you start using templates. Consider:
template <typename T>
T f(const T & t) {
    if (SomeFunc(t)) {
        return t;
    }
    else { // error path
        return ???; // what can we return?
    }
}
There is no obvious special value we can return in this case, so throwing an exception is really the only way. Returning boolean types which must be checked, and passing the really interesting values back by reference, leads to a horrendous coding style.
Quite a few books, etc., strongly advise the second, so you're not mixing roles and forcing the return value to carry two entirely unrelated pieces of information.
While I sympathize with that notion, I find that the first typically works out better in practice. For one obvious point, in the first case you can chain the assignment to an arbitrary number of recipients, but in the second, if you need or want to assign the result to more than one recipient, you have to do the call and then separately do a second assignment. I.e.,
account1.rate = account2.rate = current_rate();
vs.:
set_current_rate(account1.rate);
account2.rate = account1.rate;
or:
set_current_rate(account1.rate);
set_current_rate(account2.rate);
The proof of the pudding is in the eating thereof. Microsoft's COM functions (for one example) chose the latter form exclusively. IMO, it is due largely to this decision alone that essentially all code that uses the native COM API directly is ugly and nearly unreadable. The concepts involved aren't particularly difficult, but the style of the interface turns what should be simple code into an almost unreadable mess in virtually every case.
Exception handling is usually a better way to handle things than either one, though. It has three specific effects, all of which are very good. First, it keeps the mainstream logic from being polluted with error handling, so the real intent of the code is much more clear. Second, it decouples error handling from error detection; code that detects a problem is often in a poor position to handle that error very well. Third, unlike either form of returning an error, it is essentially impossible to simply ignore an exception being thrown. With return codes, there's a nearly constant temptation (to which programmers succumb all too often) to simply assume success and make no attempt at even catching a problem, especially since the programmer doesn't really know how to handle the error at that part of the code anyway, and is well aware that even if he catches it and returns an error code from his function, chances are good that it will be ignored anyway.
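As a small illustration of this style (the names are invented for the example): detection throws at the point of failure, and handling lives wherever it actually makes sense.

#include <stdexcept>
#include <iostream>

double checked_divide(double a, double b) {
    if (b == 0)
        throw std::invalid_argument("divide by zero"); // detect here
    return a / b; // mainstream logic stays clean
}

int main() {
    try {
        std::cout << checked_divide(1.0, 0.0) << '\n';
    } catch (const std::invalid_argument& e) {
        std::cerr << e.what() << '\n'; // handle far from detection
    }
}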
In C, one of the more common techniques I have seen is that a function returns zero on success and non-zero (typically an error code) on failure. If the function needs to pass data back to the caller, it does so through a pointer passed as a function argument. This can also make functions that return multiple pieces of data back to the user more straightforward to use (vs. returning some data through a return value and some through a pointer).
Another C technique I see is to return 0 on success; on error, -1 is returned and errno is set to indicate the error.
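A minimal sketch of the first convention (zero on success, an error code on failure, data passed back through a pointer); parse_port is a made-up example, not a standard function:

#include <errno.h>
#include <stdlib.h>

/* Returns 0 on success, an error code on failure.
   *out is only written on success. */
int parse_port(const char *s, int *out) {
    char *end;
    long v = strtol(s, &end, 10);
    if (end == s || *end != '\0' || v < 1 || v > 65535)
        return EINVAL;
    *out = (int)v;
    return 0;
}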
The techniques you presented each have pros and cons, so deciding which one is "best" will always be (at least partially) subjective. However, I can say this without reservation: the best technique is the one used consistently throughout your entire program. Using different styles of error-reporting code in different parts of a program can quickly become a maintenance and debugging nightmare.
There shouldn't be much, if any, performance difference between the two. The choice depends on the particular use. You cannot use the first if there is no appropriate invalid value.
If using C++, there are many more possibilities than these two, including exceptions and using something like boost::optional as a return value.
C traditionally used the first approach of coding magic values in valid results - which is why you get fun stuff like strcmp() returning false (=0) on a match.
Newer safe versions of a lot of the standard library functions use the second approach - explicitly returning a status.
And no, exceptions aren't an alternative here. Exceptions are for exceptional circumstances which the code might not be able to deal with; you don't raise an exception for a string not matching in strcmp().
Regardless of which error-reporting method you use, the best practice is to design a function, whenever possible, so that it does not have failure cases, and when that's not possible, to minimize the possible error conditions. Some examples:
Instead of passing a filename deep down through many function calls, you could design your program so that the caller opens the file and passes the FILE * or file descriptor. This eliminates the need to check for "failed to open file" and report it to the caller at each step.
If there's an inexpensive way to check (or find an upper bound for) the amount of memory a function will need to allocate for the data structures it will build and return, provide a function to return that amount and have the caller allocate the memory (see the sketch after this list). In some cases this may allow the caller to simply use the stack, greatly reducing memory fragmentation and avoiding locks in malloc.
When a function is performing a task for which your implementation may require large working space, ask if there's an alternate (possibly slower) algorithm with O(1) space requirements. If performance is non-critical, simply use the O(1) space algorithm. Otherwise, implement a fallback case to use it if allocation fails.
These are just a few ideas, but applying the same sort of principle all over can really reduce the number of error conditions you have to deal with and propagate up through multiple call levels.
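To illustrate the second idea above, a hedged sketch where the caller owns all allocation; encoded_size and encode are invented names standing in for a real API:

#include <cstddef>
#include <vector>

// Cheap upper bound on output size, so the caller controls allocation.
std::size_t encoded_size(std::size_t n) { return 2 * n; }

// Writes exactly 2*n bytes to out; no allocation, hence no failure path.
void encode(const unsigned char* in, std::size_t n, unsigned char* out) {
    for (std::size_t i = 0; i < n; ++i) {
        out[2 * i]     = in[i];
        out[2 * i + 1] = static_cast<unsigned char>(in[i] ^ 0xFF);
    }
}

int main() {
    unsigned char in[3] = {1, 2, 3};
    std::vector<unsigned char> out(encoded_size(3)); // caller allocates
    encode(in, 3, out.data());
}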
For C++ I favour a templated solution that prevents the fugliness of out parameters and the fugliness of "magic numbers" in combined answers/return codes. I've expounded upon this while answering another question. Take a look.
For C, I find the fugly out parameters less offensive than fugly "magic numbers".
You missed a method: returning a failure indication and requiring an additional call to get the details of the error.
There's a lot to be said for this.
Example:
int count;
if (!TryParse("12x3", &count))
    DisplayError(GetLastError());
edit
This answer has generated quite a bit of controversy and downvoting. To be frank, I am entirely unconvinced by the dissenting arguments. Separating whether a call succeeded from why it failed has proven to be a really good idea. Combining the two forces you into the following pattern:
HKEY key;
long errcode = RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key);
if (errcode != ERROR_SUCCESS)
    return DisplayError(errcode);
Contrast this with:
HKEY key;
if (!RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key))
    return DisplayError(GetLastError());
(The GetLastError version is consistent with how the Windows API generally works, but the version that returns the code directly is how it actually works, due to the registry API not following that standard.)
In any case, I would suggest that the error-returning pattern makes it all too easy to forget about why the function failed, leading to code such as:
HKEY key;
if (RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key) != ERROR_SUCCESS)
    return DisplayGenericError();
edit
Looking at R.'s request, I've found a scenario where it can actually be satisfied.
For a general-purpose C-style API, such as the Windows SDK functions I've used in my examples, there is no non-global context for error codes to rest in, so we have no good alternative to using a global TLV (thread-local variable) that can be checked after failure.
However, if we expand the topic to include methods on a class, the situation is different. It's perfectly reasonable, given a variable reg that is an instance of the RegistryKey class, for a call to reg.Open to return false, requiring us to then call reg.ErrorCode to retrieve the details.
I believe this satisfies R.'s request that the error code be part of a context, since the instance provides the context. If, instead of a RegistryKey instance, we called a static Open method on RegistryKeyHelper, then the retrieval of the error code on failure would likewise have to be static, which means it would have to be a TLV, albeit not an entirely global one. The class, as opposed to an instance, would be the context.
In both of these cases, object orientation provides a natural context for storing error codes. Having said that, if there is no natural context, I would still insist on a global, as opposed to trying to force the caller to pass in an output parameter or some other artificial context, or returning the error code directly.
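A hypothetical sketch of the instance-as-context idea; the registry call is stubbed out, since only the shape of the API matters here:

// Open() reports success or failure; the instance keeps the details.
class RegistryKey {
    long last_error_ = 0;
public:
    bool Open(const char* path) {
        long rc = (path && *path) ? 0 : 2; // stand-in for the real API call
        last_error_ = rc;
        return rc == 0;
    }
    long ErrorCode() const { return last_error_; }
};

// Usage, mirroring the examples above:
//   RegistryKey reg;
//   if (!reg.Open("HKEY_CLASSES_ROOT"))
//       return DisplayError(reg.ErrorCode());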
I think there is no right answer to this. It depends on your needs, on the overall application design etc. I personally use the first approach though.
I think a good compiler would generate almost the same code, with the same speed. It's a personal preference. I would go with the first.
If you have references and the bool type, you must be using C++. In which case, throw an exception. That's what they're for. For a general desktop environment, there's no reason to use error codes. I have seen arguments against exceptions in some environments, like dodgy language/process interop or a tight embedded environment. Assuming neither of those, always, always throw an exception.
Well, the first one will compile in both C and C++, so for portable code it's fine.
The second one, although more "human readable", means you never truly know which value the program is returning; specifying it as in the first case gives you more control. That's what I think.
I prefer using a return code for the type of error that occurred. This helps the caller of the API take appropriate error-handling steps.
Consider GLIB APIs, which most often return an error code and an error message along with the boolean return value.
Thus, when you get a failure return from a function call, you can check the context from the GError variable.
A failure in the second approach you specified will not help the caller take correct actions. It's a different case when your documentation is very clear, but otherwise it will be a headache to figure out how to use the API call.
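As a concrete illustration of the GLIB convention, g_file_get_contents() has exactly this shape: a boolean result plus a GError out-parameter carrying the code and message:

#include <glib.h>
#include <stdio.h>

int main(void) {
    GError *err = NULL;
    gchar *contents = NULL;
    gsize len = 0;
    if (!g_file_get_contents("config.txt", &contents, &len, &err)) {
        fprintf(stderr, "read failed: %s (code %d)\n",
                err->message, err->code);
        g_error_free(err);
        return 1;
    }
    g_free(contents);
    return 0;
}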
For a "try" function, where some "normal" type of failure is reasonably expected, how about accepting either a default return value or a pointer to a function which accepts certain parameters related to the failure and returns such a value of the expected type?
Apart from doing it the correct way, which of these two stupid ways do you prefer?
I prefer to use exceptions when I'm using C++ and need to throw an error, and in general, when I don't want to force all calling functions to detect and handle the error. I prefer to use stupid special values when there is only one possible error condition, that condition means there is no way the caller can proceed, and every conceivable caller will be able to handle it, which is rare. I prefer to use stupid out parameters when modifying old code and, for some reason, I can change the number of parameters but not the return type or identify a special value or throw an exception, which so far has been never.
Does the additional parameter in the second method bring notable performance overhead?
Yes! Additional parameters cause your 'puter to slow down by at least 0 nanoseconds. Best to use the "no-overhead" keyword on that parameter. It's a GCC extension __attribute__((no-overhead)), so YMMV.

Issue with using std::copy

I am getting a warning when using the std::copy function.
I have a byte array that I declare:
byte *tstArray = new byte[length];
Then I have a couple of other byte arrays that are declared and initialized with some hex values that I would like to use depending on some initial user input.
I have a series of if statements that I use to basically parse out the original input; based on some string, I choose which byte array to use, and in doing so copy the results to the original tstArray.
For example:
if (substr1 == "15")
{
    std::cout << "Using byte array rated 15" << std::endl;
    std::copy(ratedArray15, ratedArray15 + length, tstArray);
}
The warning I get is:
warning C4996: 'std::copy': Function call with parameters that may be unsafe - this call relies on the caller to check that the passed values are correct.
A possible solution to disable this warning is by using -D_SCL_SECURE_NO_WARNINGS, I think. Well, that is what I am researching.
But I am not sure if this means that my code is really unsafe and I actually need to do some checking?
C4996 means you're using a function that was marked as __declspec(deprecated). Probably defining _SCL_SECURE_NO_WARNINGS will just #ifdef out the deprecation. You could go read the header file to know for sure.
But the question is why it is deprecated. MSDN doesn't seem to say anything about it on the std::copy() page, but I may be looking at the wrong one. Typically this was done for all "unsafe string manipulation functions" during the great security push of XPSP2. Since you aren't passing the length of your destination buffer to std::copy, if you try to write too much data to it, it will happily write past the end of the buffer.
To say whether or not your usage is unsafe would require us to review your entire code. Usually there is a safer version they recommend when they deprecate a function in this manner. You could just copy the strings in some other way. This article seems to go in depth. They seem to imply you should be using a stdext::checked_array_iterator instead of a regular output iterator.
Something like:
stdext::checked_array_iterator<char *> chkd_test_array(tstArray, length);
std::copy(ratedArray15, ratedArray15+length, chkd_test_array);
(If I understand your code right.)
Basically, what this warning tells you is that you have to be absolutely sure that tstArray points to an array that is large enough to hold "length" elements, as std::copy does not check that.
Well, I assume Microsoft's unilateral deprecation of the stdlib also includes passing char* to std::copy. (They've messed with a whole range of functions, actually.)
I suppose parts of it have some merit (fopen() touches the global errno, so it's not thread-safe), but other decisions do not seem very rational. (I'd say they took too big a swathe at the whole thing. There should be levels, such as non-thread-safe, non-checkable, etc.)
I'd recommend reading the MS docs on each function if you want to know the issues in each case, though; it's pretty well documented why each function has that warning, and the cause is usually different in each case.
At least it seems that VC++ 2010 RC does not emit that warning at the default warning level.

isDefined function?

In C++, is there any function that returns true when a variable is defined, and false otherwise? Something like this:
bool isDefined(string varName)
{
    if (a variable called "varName" is defined)
        return true;
    else
        return false;
}
C++ is not a dynamic language, which means the answer is no. You know this at compile time, not at runtime.
There is no such thing at runtime, as it doesn't make sense in a non-dynamic language like C++.
However, you can use the variable inside a sizeof to test at compile time whether it exists, without side effects.
(void)sizeof(variable);
That will stop compilation if the variable doesn't exist.
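A tiny demonstration of the trick; sizeof never evaluates its operand, so the check is free at runtime:

int main() {
    int x = 0;
    (void)sizeof(x);    // compiles: x exists
    // (void)sizeof(y); // would not compile: 'y' was not declared
    return 0;
}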
As already stated, the C++ runtime system does not support querying whether or not a variable is declared. In general, a C++ binary doesn't contain information on variable symbols or their mappings to their locations. Technically, this information would be available in a binary compiled with debugging information, and you could certainly query the debugging information to see if a variable name is present at a given location in code, but it would be a dirty hack at best. (If you're curious to see what it might look like, I posted a terrible snippet at "Call a function named in a string variable in C", which calls a C function by a string using the DWARF debugging information. Doing something like this is not advised.)
Microsoft has two extensions to C++ named __if_exists and __if_not_exists. They can be useful in some cases, but they don't take string arguments.
If you really need such functionality, you can add all your variables to a set and then query that set for variable existence.
It has already been mentioned that C++ doesn't provide such a facility.
On the other hand, there are cases where the OS implements mechanisms close to isDefined(),
like the GetProcAddress function on Windows.
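A small sketch of that mechanism; GetProcAddress answers at runtime whether a named symbol exists in a loaded module:

#include <windows.h>
#include <iostream>

int main() {
    HMODULE mod = GetModuleHandleA("kernel32.dll");
    FARPROC fn = mod ? GetProcAddress(mod, "CreateFileW") : NULL;
    std::cout << (fn ? "defined" : "not defined") << std::endl;
    return 0;
}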
No. It's not like you have a runtime system around C++ which remembers variables with names in some sort of table (metadata) and lets you access a variable through a dynamically generated string. If you want this, you have to build it yourself, for example using a std::map that maps strings to some objects.
Some compile-time mechanism would fit into the language, but I don't think it would be very useful.
In order to achieve this, you first need to implement a dynamic variable handling system, or at least find one on the internet. As previously mentioned, C++ is designed to be a native language, so there are no built-in facilities to do this.
The easiest solution I can suggest is to create a std::map with string keys that stores the global variables of interest as a boost::any, wxVariant, or something similar. You can make your life a bit easier with a little preprocessor macro that defines a variable by its name, so you don't need to retype the name of the variable twice. Also, to make life easier, I suggest creating a little inline function which accesses this variable map and checks whether the given string key is contained in the map (see the sketch below).
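A minimal sketch of that map-based approach, assuming Boost is available; isDefined here is the suggested inline helper, not a built-in:

#include <map>
#include <string>
#include <boost/any.hpp>

// Global registry of "dynamic" variables, keyed by name.
std::map<std::string, boost::any> g_vars;

inline bool isDefined(const std::string& name) {
    return g_vars.find(name) != g_vars.end();
}

// Usage:
//   g_vars["score"] = 42;
//   if (isDefined("score")) { /* ... */ }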
There are implementations of such functionality in many places; runtime property-handling systems are available in various fashions. But if you need just this functionality, I suggest implementing it yourself, because most of these solutions are quite general, which you probably don't need.
You can make such a function, but it wouldn't operate on strings; you would have to pass the variable itself. Such a function would try to add 0 to the variable. If it doesn't exist, an error would occur, so you might want to try exception handling with try...throw...catch. But because I'm on the phone, I don't know if this wouldn't throw an error anyway when trying to pass a non-existing variable to the function...