error_code: how to set and check errno - C++

I'm trying to understand what category I should use when calling a C function that sets errno on Linux.
I'm not sure all the possible error codes are defined by POSIX, so I'm tempted to use the system_category.
But I'd like to handle generic conditions in my code later on, so I would like to do something like this:
std::error_code ec;
some_func(some_path, ec);
if (ec) {
if (ec == std::errc::file_exists) {
// special handling
}
return ec;
}
To set the error code in some_func(), I expected to proceed like this:
ec.assign(EEXIST, std::system_category());
Mostly based on this discussion:
<system_error> categories and standard/system error codes
And the code sample provided by @niall-douglas:
std::error_code ec;
if(-1 == open(...))
ec = std::error_code(errno, std::system_category());
// To test using portable code
if(ec == std::errc::no_such_file_or_directory)
...
// To convert into nearest portable error condition (lossy, may fail)
std::error_condition ec2(ec.default_error_condition())
-- https://stackoverflow.com/a/40063005/951426
However, on Linux, with GCC 6.1.1, I have:
std::error_code(EEXIST, std::system_category()) == std::errc::file_exists returns false
std::error_code(EEXIST, std::generic_category()) == std::errc::file_exists returns true
I was expecting the errno + system_category to be comparable with std::errc conditions.
This means my initial code that checks if (ec == std::errc::file_exists) does not work if I don't use the generic category.
Is this the expected behavior?

This is a bug recently fixed in latest GCC 6, 7 and 8 point releases. It'll work as you expect if you're on the latest point release. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60555.

Sending a large text via Boost ASIO

I am trying to send a very large string to one of my clients. I am mostly following the code in the HTTP server example: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/examples/cpp11_examples.html
Write callbacks return with error code 14, which probably means EFAULT, "bad address", according to this link:
https://mariadb.com/kb/en/operating-system-error-codes/
Note that I could not use the message() member function of error_code to read the error message, as that was causing a segmentation fault. (I am using Boost 1.53, and the error might be due to this: https://github.com/boostorg/system/issues/50)
When I try to send small strings, say of size 10, the write callback does not return with an error.
Here is how I am using async_write:
void Connection::do_write(const std::string& write_buffer)
{
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(write_buffer, write_buffer.size()),
[this, self, write_buffer](boost::system::error_code ec, std::size_t transfer_size)
{
if (!ec)
{
} else {
// code enters here when I am sending a large text.
// transfer_size always prints 65535
}
});
}
Here is how I am using async_read_some:
void Connection::do_read()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(buffer_),
[this, self](boost::system::error_code ec, std::size_t bytes_transferred)
{
if (!ec)
{
do_write(VERY_LARGE_STRING);
do_read();
} else if (ec != boost::asio::error::operation_aborted) {
connection_manager_.stop(shared_from_this());
}
});
}
What could be causing the write callback to return with an error for large strings?
The segfault indicates likely Undefined Behaviour to me.
Of course there's too little code to tell, but one strong smell is that you're using a reference to a non-member as the buffer:
boost::asio::buffer(write_buffer, write_buffer.size())
Besides the fact that this could simply be spelled boost::asio::buffer(write_buffer), there's not much hope that write_buffer stays around for the duration of the asynchronous operation that depends on it.
As the documentation states:
Although the buffers object may be copied as necessary, ownership of the underlying memory blocks is retained by the caller, which must guarantee that they remain valid until the handler is called.
I would check that you're doing that correctly.
Another potential cause for UB is when you cause overlapping writes on the same socket/stream object:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
If you checked both these causes of concern and found that something must still be wrong, please post a new question including a fully self-contained example (SSCCE or MCVE).

How to delegate an action to function return?

The problem
I have the following simple situation popping up all over the place. A large number of requests come to the device with a function signature like this:
Err execute( const ICommandContext &context,
const RoutineArguments &arguments,
RoutineResults &results)
There is essentially a request-handling server that will call this execute function for a variety of request types that have these signatures. We have 2 return paths in the case of an error.
The Err output type (consider it to be equivalent to an int) is used to inform the server or system that something has gone wrong that is to do with the system, not the request. This is always sorted at the top of the function before the user request is dealt with.
RoutineResults provides a setStatus function that can be used to return failure information of the request to the client.
For this reason we have a lot of this type of code popping up:
// Failure due to request
Err error = someFunctionCall(clientInput);
if (!error.success()) {
results.setStatus(error); // Inform the client of the error
return SUCCESS; // Inform the system that we are all good
}
We have a particular request type that has around 15 parameters that come in and are sent off around the system. We would conceptually need 15 of these if-error-then-set blocks, which seems wasteful. It is also prone to errors if we need to go through and change anything about how we return. How can we effectively delegate the setStatus and return to a short amount of code that only needs to happen once in the function?
A Macro Solution
A C system might solve this with a macro, something like:
#define M_InitTry Err error
#define M_Try(statement) if (!(error = statement).success()) { goto catch_lab; }
#define M_Catch catch_lab: if (!error.success())
#define M_Return return error
Which would be used like this:
Err execute( const ICommandContext &context, ...) {
M_InitTry;
...
M_Try(someFunctionCall(clientInput));
M_Try(someFunctionCall(otherClientInput));
...
M_Catch {
// Other specific actions for dealing with the return.
results.setStatus(error);
error = SUCCESS;
}
M_Return;
}
This cleans the code nicely, but is not particularly nice with the goto. It will cause problems if defining variables that might be skipped by a goto.
A delegating solution
I was trying to think of a more C++ way, so I thought an RAII-type delegate might help. Something like:
class DelegateToFunctionEnd {
typedef std::function<void(void)> EndFunction;
public:
DelegateToFunctionEnd(EndFunction endFunction) : callAtEnd(endFunction) { }
~DelegateToFunctionEnd() {
callAtEnd();
}
private:
EndFunction callAtEnd;
};
Pretty simple: it delegates the action until function return by implementing the action in the destructor. You might use it like this:
Err execute( const ICommandContext &context, ...) {
Err error;
DelegateToFunctionEnd del(std::bind(&RoutineResults::setStatus, &results, std::cref(error)));
error = someFunctionCall(clientInput);
if (error) return SUCCESS;
...
}
This solution seems ok, but has several problems:
It is not as clear what is happening.
It is easier to make a mistake about setting error correctly.
You still need a large number of if statements to deal with the returns.
The ability to configure the terminating action is not great.
Dangerous if the user doesn't carefully consider the destruction order of items at function return.
A better solution?
This must be a problem that comes up often. Is there a general solution that provides a clean delegation of this set-and-return type of action?
I have some unfortunate restrictions below. Don't let these stop you from answering because it might be helpful for future people.
I am working on a C++03-limited system. We have Boost, but no C++11.
Embedded system and we have silly rules about exceptions and memory allocation.
If error status codes are proving troublesome, you should consider using exceptions instead. That is, change the API of your functions
so they are guaranteed to have success as a post-condition
throw a suitable std::exception in the event of failure
It is impossible to "forget" to examine a status code if you do this. If you choose not to handle an error condition, the exception thrown by low-level code automatically percolates upwards. You only need to catch a low-level exception if
You need to do some manual roll-back or deallocation in the event of an error, and RAII is impractical. In this case you would rethrow the exception.
You want to translate a low-level exception message or exception type into a high-level message, using a nested exception.
Maybe, you can write your statement as array, something like:
Err execute( const ICommandContext &context, ...)
{
const boost::function<Err()> functions[] = {
boost::bind(&someFunctionCall, std::ref(clientInput)),
boost::bind(&someFunctionCall, std::ref(otherClientInput)),
// ...
};
for (std::size_t i = 0; i != sizeof(functions) / sizeof(functions[0]); ++i) {
Err err = functions[i]();
if (!err.success()) {
results.setStatus(err);
return SUCCESS;
}
}
return SUCCESS;
}
and if you do that several times with only different statements,
you might create
Err execute_functions(const ICommandContext &context, std::function<Err()> functions);
Maybe also provide other entry points such as OnError, depending on your needs.
Split the function.
The inner function returns an error code based on user input; the outer translates that to a client error, and only returns server side errors.
Inner function contains:
if(Err error = someFunctionCall(clientInput))
return error;
repeatedly. The outer function has the relay to the client error code, but only once.
Err just needs an operator bool. If it cannot have one, create a type that converts to/from Err and has an operator bool.
Could you add a method to Err that does the check etc. and returns a bool?
if(!someFunctionCall(clientInput).handleSuccess(results))
{
return SUCCESS;
}

How to get the exit code of a Boost process?

I am wondering how to get the exit code of my child process. The function exit_code() always returns 0, no matter whether the child was terminated (sent SIGKILL) or finished correctly.
I am using Boost 1.65 and C++0x. I can't change these settings.
As I read in the documentation:
int exit_code() const;
Get the exit_code. The return value is without any meaning if the child wasn't waited for or if it was terminated.
So this function is not helping me, but maybe I could use the error code.
std::error_code ec;
bp::system("g++ main.cpp", ec);
But std::error_code is only supported since C++11. I tried boost::system::error_code, but that's not correct.
Here the link to Boost::process:
https://www.boost.org/doc/libs/1_65_0/doc/html/boost_process/tutorial.html
Any idea, how to get that exit code?
You should be able to get the exit code simply by checking the return value:
int ec = bp::system("g++ main.cpp");
The overload taking an std::error_code is only for handling the edge case of g++ not existing in the first place (so it can never start the executable, and therefore there is no exit code). If you do not use that overload, it will throw an exception on failure instead.[1]
try {
int ec = bp::system("g++ main.cpp");
// Do something with ec
} catch (bp::process_error& exception) {
// g++ doesn't exist at all
}
A cleaner way to do this would be to resolve g++ yourself first by searching the $PATH environment variable (just like your shell would):
auto binary_path = bp::search_path("g++");
if (binary_path.empty()) {
// g++ doesn't exist
} else {
int ec = bp::system(binary_path, "main.cpp");
}
[1] Note, however, that C++0x is C++11, just before it was formally standardized, and it is very likely your standard library will support std::error_code even if you tell it to use C++0x.

Can this use of C++ exceptions be justified

I have a C++ API which throws exceptions in error conditions. Usually, the method I have seen in C++ APIs to notify errors is by special return codes and functions which return the last error string, which can be checked if a method returns an error code. This method has limitations: for example, if you need to return an integer from a function and the whole integer value range is valid for return values, you can't return an error code.
Due to this, I choose to throw an exception in my API when an error occurs in a function.
Is this an acceptable usage of exceptions in C++?
Also, in some functions in my API (e.g. authenticate()), I have two options.
return bool to indicate success.
return void and throw an exception if failed.
If the first option is used, it is not consistent with the other functions, because they all throw exceptions. Also, it is difficult to indicate what the error is.
So is it OK to use the second method in such functions too?
In the following answer, it is mentioned that it is bad to use C++ exceptions for controlling program flow. I have heard the same elsewhere too.
https://stackoverflow.com/a/1736162/1015678
Does my usage violate this? I cannot clearly identify whether I am using exceptions for controlling program flow here.
the method I have seen in C++ APIs to notify errors is by special return codes and functions which returns last error string which can be checked if method returns an error code.
Sometimes that's done for good reasons, but more often when you see that the C++ library wraps an older C library, has been written by someone more comfortable with C, written for client coders more comfortable with C, or is written for interoperability with C.
return an integer from a function and the whole integer value range is valid for return values so you can't return an error code.
Options include:
exceptions
returning with a wider type (e.g. getc() returns an int with -1 indicating EOF).
returning a success flag alongside the value, wrapped in a boost::optional, pair, tuple or struct
having at least one of the success flag and/or value owned by the caller and specified to the function as a non-const by-reference or by-pointer parameter
Is this an acceptable usage of exceptions in C++?
Sounds OK, but the art is in balancing the pros and cons, and we don't know whether it's optimally convenient and robust for client code calling your functions. Understanding their expectations is key, and those will partly be formed by their overall C++ experience, but also by the rest of your API, any other APIs shipped alongside yours, and even other libraries they're likely to be using in the same apps etc.
Consider too whether the caller of a function is likely to want to handle the success or failure of that function in the context of the call, separately from other possible failures. For example, sometimes it's easier for client code to work with functions returning boolean success values:
if (fn(1) && !fn(2))
fn(3);
try
{
fn(1);
try
{
fn2();
}
catch (const ExpectedExceptionFromFn2Type&)
{
fn3();
}
}
catch (const PossibleExceptionFromFn1Type&)
{
// that's ok - we don't care...
}
But other times it can be easier with exceptions:
try
{
My_X x { get_X(99) };
if (x.is_happy(42))
x += next_prime_after(x.to_int() * 3);
}
catch (std::exception& e)
{
std::cerr << "oops\n";
}
...compared to...
bool success;
My_X x;
if (get_X(&x, 99)) {
if (x.is_valid()) {
bool happy;
if (x.can_get_happy(&happy, 42) && happy) {
int my_int;
if (x.can_convert_to_int(&my_int)) {
if (!x.add(next_prime_after(x.to_int() * 3))) {
std::cerr << "blah blah\n";
return false;
} else { cerr / return false; }
} else { cerr / return false; }
} else { cerr / return false; }
} else { cerr / return false; }
} else { cerr / return false; }
(Exactly how bad it gets depends on whether functions support reporting an error, or can be trusted to always work. That's difficult too, because if something happens that makes it possible for a function to start failing (e.g. it starts using a data file that could potentially be missing), and client code didn't already accept and check an error code or catch exceptions, then that client code may need to be reworked once the potential for errors is recognised. That's less true for exceptions, which - when you're lucky - may propagate to some already-suitable client catch statement, but on the other hand it's risky to assume so without at least eyeballing the client code.)
Once you've considered whatever you know about client usage, there may still be some doubt about which approach is best: often you can just pick something and stick to it throughout your API, but occasionally you may want to offer multiple versions of a function, e.g.:
bool can_authenticate();
void authenticate_or_throw();
...or...
enum Errors_By { Return_Value, Exception };
bool authenticate(Errors_By e) { ... if (e == Exception) throw ...; return ...; }
...or...
template <class Error_Policy>
struct System
{
bool authenticate() { ... Error_Policy::return_or_throw(...); }
...
}
Also, in some functions in my API (e.g. authenticate()), I have two options.
As above, you have more than 2 options. Anyway, consistency is very important. It sounds like exceptions are appropriate.
mentioned that it is bad to use C++ exceptions for controlling program flow
That is precisely what exceptions do and all they can be used for, but I do understand what you mean. Ultimately, striking the right balance is an art that comes with having used a lot of other software, considering other libraries your clients will be using alongside yours etc.. That said, if something is an "error" in some sense, it's at least reasonable to consider exceptions.
For something like authenticate(), I'd expect you to return a bool if you were able to compute a true/false value for the authentication, and throw an exception if something prevented you from doing that. The comment about using exceptions for flow control is suggesting NOT doing something like:
try {
...
authenticate();
// rely on the exception to not execute the rest of the code.
...
} catch (...) { ... }
For instance, I can imagine an authenticate() method that relies on contacting some service, and if you can't communicate with that service for some reason, you don't know if the credentials are good or bad.
Then again, the other major rule of thumb for APIs is "be consistent". If the rest of the API relies on exceptions to serve as the false value in similar cases, use that, but to me, it's a little on the ugly side. I'd lean toward reserving exceptions for the exceptional case - i.e. rare, shouldn't ever happen during normal operations, cases.

Is it evil to redefine assert?

Is it evil to redefine the assert macro?
Some folks recommend using your own macro ASSERT(cond) rather than redefining the existing, standard assert(cond) macro. But this does not help if you have a lot of legacy code using assert() that you don't want to make source code changes to, and whose assertion reporting you want to intercept and regularize.
I have done
#undef assert
#define assert(cond) ... my own assert code ...
in situations such as the above - code already using assert, that I wanted to extend the assert-failing behavior of - when I wanted to do stuff like
1) printing extra error information to make the assertions more useful
2) automatically invoking a debugger or stack track on an assert
... this, 2), can be done without redefining assert, by implementing a SIGABRT signal handler.
3) converting assertion failures into throws.
... this, 3), cannot be done by a signal handler - since you can't throw a C++ exception from a signal handler. (At least not reliably.)
Why might I want to make assert throw? Stacked error handling.
I usually do the latter not because I want the program to continue running after the assertion (although see below), but because I like using exceptions to provide better context on errors. I often do:
int main() {
try { some_code(); }
catch(...) {
std::string err = "exception caught in command foo";
std::cerr << err;
exit(1);
}
}
void some_code() {
try { some_other_code(); }
catch(...) {
std::string err = "exception caught when trying to set up directories";
std::cerr << err;
throw "unhandled exception, throwing to add more context";
}
}
void some_other_code() {
try { some_other2_code(); }
catch(...) {
std::string err = "exception caught when trying to open log file " + logfilename;
std::cerr << err;
throw "unhandled exception, throwing to add more context";
}
}
etc.
I.e. the exception handlers add a bit more error context, and then rethrow.
Sometimes I have the exception handlers print, e.g. to stderr.
Sometimes I have the exception handlers push onto a stack of error messages.
(Obviously that won't work when the problem is running out of memory.)
These assert exceptions still exit ...
Somebody who commented on this post, @IanGoldby, said "The idea of an assert that doesn't exit doesn't make any sense to me."
Lest I was not clear: I usually have such exceptions exit. But eventually, perhaps not immediately.
E.g. instead of
#include <iostream>
#include <assert.h>
#define OS_CYGWIN 1
void baz(int n)
{
#if OS_CYGWIN
assert( n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin." );
#else
std::cout << "I know how to do baz(n) most places, and baz(n), n!=1 on Cygwin, but not baz(1) on Cygwin.\n";
#endif
}
void bar(int n)
{
baz(n);
}
void foo(int n)
{
bar(n);
}
int main(int argc, char** argv)
{
foo( argv[0] == std::string("1") );
}
producing only
% ./assert-exceptions
assertion "n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin."" failed: file "assert-exceptions.cpp", line 9, function: void baz(int)
/bin/sh: line 1: 22180 Aborted (core dumped) ./assert-exceptions/
%
you might do
#include <iostream>
//#include <assert.h>
#define assert_error_report_helper(cond) "assertion failed: " #cond
#define assert(cond) {if(!(cond)) { std::cerr << assert_error_report_helper(cond) "\n"; throw assert_error_report_helper(cond); } }
//^ TBD: yes, I know assert needs more stuff to match the definition: void, etc.
#define OS_CYGWIN 1
void baz(int n)
{
#if OS_CYGWIN
assert( n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin." );
#else
std::cout << "I know how to do baz(n) most places, and baz(n), n!=1 on Cygwin, but not baz(1) on Cygwin.\n";
#endif
}
void bar(int n)
{
try {
baz(n);
}
catch(...) {
std::cerr << "trying to accomplish bar by baz\n";
throw "bar";
}
}
void foo(int n)
{
bar(n);
}
int secondary_main(int argc, char** argv)
{
foo( argv[0] == std::string("1") );
return 0;
}
int main(int argc, char** argv)
{
try {
return secondary_main(argc,argv);
}
catch(...) {
std::cerr << "main exiting because of unknown exception ...\n";
}
}
and get the slightly more meaningful error messages
assertion failed: n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin."
trying to accomplish bar by baz
main exiting because of unknown exception ...
I should not have to explain why these context sensitive error messages can be more meaningful.
E.g. the user may not have the slightest idea why baz(1) is being called.
It may well be a program error - on Cygwin, you may have to call cygwin_alternative_to_baz(1).
But the user may understand what "bar" is.
Yes: this is not guaranteed to work. But, for that matter, asserts are not guaranteed to work if they do anything more complicated than calling write() in the abort handler:
write(2,"error baz(1) has occurred",64);
and even that is not guaranteed to work (there's a security bug in this invocation: the length 64 exceeds the actual size of the string).
E.g. if malloc or sbrk has failed.
Why might I want to make assert throw? Testing
The other big reason that I have occasionally redefined assert has been to write unit tests for legacy code, code that uses assert to signal errors, which I am not allowed to rewrite.
If this code is library code, then it is convenient to wrap calls via try/catch. See if the error is detected, and go on.
Oh, heck, I might as well admit it: sometimes I wrote this legacy code. And I deliberately used assert() to signal errors, because I could not rely on the user doing try/catch/throw - in fact, oftentimes the same code must be used in a C/C++ environment. I did not want to use my own ASSERT macro - because, believe it or not, ASSERT often conflicts. I find code that is littered with FOOBAR_ASSERT() and A_SPECIAL_ASSERT() ugly. No... simply using assert() by itself is elegant and basically works. And it can be extended... if it is okay to override assert().
Anyway, whether the code that uses assert() is mine or from someone else: sometimes you want code to fail, by calling SIGABRT or exit(1) - and sometimes you want it to throw.
I know how to test code that fails by exit(a) or SIGABRT - something like
for all tests do
fork
... run test in child
wait
check exit status
but this code is slow, not always portable, and often runs several thousand times slower than
for all tests do
try {
... run test
} catch (... ) {
...
}
This is riskier than just stacking error message context, since you may continue operating. But you can always choose which types of exceptions to catch.
Meta-Observation
I am with Andrei Alexandrescu in thinking that exceptions are the best known method to report errors in code that wants to be secure. (Because the programmer cannot forget to check an error return code.)
If this is right ... if there is a phase change in error reporting, from exit(1)/signals/ to exceptions ... one still has the question of how to live with the legacy code.
And, overall - there are several error reporting schemes. If different libraries use different schemes, how do you make them live together?
Redefining a Standard macro is an ugly idea, and you can be sure the behaviour's technically undefined, but in the end macros are just source code substitutions and it's hard to see how it could cause problems, as long as the assertion causes your program to exit.
That said, your intended substitution may not be reliably used if any code in the translation unit after your definition itself redefines assert, which suggests a need for a specific order of includes etc. - damned fragile.
If your assert substitutes code that doesn't exit, you open up new problems. There are pathological edge cases where your ideas about throwing instead could fail, such as:
int f(int n)
{
try
{
assert(n != 0);
call_some_library_that_might_throw(n);
}
catch (...)
{
// ignore errors...
}
return 12 / n;
}
Above, a value of 0 for n starts crashing the application instead of stopping it with a sane error message: any explanation in the thrown message won't be seen.
I am with Andrei Alexandrescu in thinking that exceptions are the best known method to report errors in code that wants to be secure. (Because the programmer cannot forget to check an error return code.)
I don't recall Andrei saying quite that - do you have a quote? He's certainly thought very carefully about how to create objects that encourage reliable exception handling, but I've never heard/seen him suggest that a stop-the-program assert is inappropriate in certain cases. Assertions are a normal way of enforcing invariants - there's definitely a line to be drawn concerning which potential assertions can be continued from and which can't, but on one side of that line assertions continue to be useful.
The choice between returning an error value and using exceptions is the traditional ground for the kind of argument/preference you mention, as they're more legitimately alternatives.
If this is right ... if there is a phase change in error reporting, from exit(1)/signals/ to exceptions ... one still has the question of how to live with the legacy code.
As above, you shouldn't try to migrate all existing exit() / assert etc. to exceptions. In many cases, there will be no way to meaningfully continue processing, and throwing an exception just creates doubt about whether the issue will be recorded properly and lead to the intended termination.
And, overall - there are several error reporting schemes. If different libraries use different schemes, how do you make them live together?
Where that becomes a real issue, you'd generally select one approach and wrap the non-conforming libraries with a layer that provides the error handling you like.
I wrote an application that runs on an embedded system. In the early days I sprinkled asserts through the code liberally, ostensibly to document conditions in the code that should be impossible (but in a few places as lazy error-checking).
It turned out that the asserts were occasionally being hit, but no one ever got to see the message output to the console containing the file and line number, because the console serial port generally was not connected to anything. I later redefined the assert macro so that instead of outputting a message to the console it would send a message over the network to the error logger.
Whether or not you think redefining assert is 'evil', this works well for us.
If you include any headers/libraries that utilize assert, you may experience unexpected behavior; otherwise, the compiler allows you to do it, so you can do it.
My suggestion, based on personal opinion, is that in any case you can define your own assert without needing to redefine the existing one. You gain no extra benefit from redefining the existing one over defining a new one with a new name.