I am very new to programming and am confused about what void does. I know that when you put void in front of a function it means that "it returns nothing", but if the function returns nothing, then what is the point of writing the function? Anyway, I got this question on my homework and am trying to answer it, but I need some help with the general concept along with it. Any help would be great, and please try to avoid technical lingo, I'm a serious newb here.
What does this function accomplish?
void add2numbers(double a, double b)
{
double sum;
sum = a + b;
}
void ReturnsNothing()
{
std::cout << "Hello!";
}
As you can see, this function returns nothing, but that doesn't mean the function does nothing.
A function is nothing more than a refactoring of the code to put commonly-used routines together. If I'm printing "Hello" often, I put the code that prints "Hello" in a function. If I'm calculating the sum of two numbers, I'll put the code to do that and return the result in a function. It's all about what you want.
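To make that concrete, here is a minimal sketch (the function names are just examples): the void function is called for what it does, the non-void one for what it hands back.
#include <iostream>

// Called only for its effect: printing a greeting.
void printHello()
{
    std::cout << "Hello!\n";
}

// Called for its result: the caller receives the sum.
double add2numbers(double a, double b)
{
    return a + b;
}

int main()
{
    printHello();                        // useful even though nothing comes back
    double sum = add2numbers(2.0, 3.0);  // useful because something comes back
    std::cout << sum << "\n";
}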
There are loads of reasons to have void functions. One is to produce "non-pure" side effects, such as modifying state outside the function:
int i=9;
void f() {
++i;
}
In this case i could be global or a class data member.
Another is observable effects, such as output:
void f() {
std::cout <<"hello world" << std::endl;
}
A void function may act on a reference or pointer value.
void f(int& i) {
++i;
}
It could also throw, although don't do this for flow control.
void f() {
while(is_not_broke()) {
//...
}
throw std::exception(); //it broke
}
The purpose of a void function is to achieve a side effect (e.g., modify a reference parameter or a global variable, perform system calls such as I/O, etc.), not return a value.
The use of the term function in the context of C/C++ is rather confusing, because it disagrees with the mathematical concept of a function as "something returning a value". What C/C++ calls functions returning void corresponds to the concept of a procedure in other languages.
The major difference between a function and a procedure is that a function call is an expression, while a procedure call is a statement. While functions are invoked for their return value, procedures are invoked for their side effects (such as producing output, changing state, and so on).
A function with void return value can be useful for its side effects. For example consider the standard library function exit:
void exit(int status)
This function doesn't return any value to you, but it's still useful for its side-effect of terminating the process.
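For illustration, a minimal sketch of calling it purely for that side effect:
#include <cstdlib>
#include <iostream>

int main()
{
    std::cout << "shutting down\n";
    std::exit(1);                    // side effect: the process terminates here
    std::cout << "never printed\n";  // unreachable
}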
You are on the right lines - the function doesn't accomplish anything, because it calculates something but that something then gets thrown away.
Functions returning void can be useful because they can have "side effects". This means something happens that isn't an input or output of the function. For example it could write to a file, or send an email.
Function is a bit of a misnomer in this case; perhaps calling it a method is better. You can call a method on an object to change its state, i.e. the values of its fields (or properties). So you might have an object with properties for x and y coordinates and a method called Move which takes parameters xDelta and yDelta.
Calling Move with 2, 3 will cause 2 to be added to your X property and 3 to be added to your Y property. So the state of the object has changed, and it wouldn't have made much sense for Move to have returned a value.
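A rough sketch of that idea in C++ (the Point class and its members here are made up purely for illustration):
#include <iostream>

class Point
{
    double x = 0.0;
    double y = 0.0;
public:
    void Move(double xDelta, double yDelta)  // nothing useful to return
    {
        x += xDelta;
        y += yDelta;
    }
    void Print() const { std::cout << x << ", " << y << "\n"; }
};

int main()
{
    Point p;
    p.Move(2, 3);  // the point's state is now (2, 3)
    p.Print();
}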
That function achieves nothing - but if you had written
void add2numbers(double a, double b, double &sum)
{
sum = a + b;
}
It would give you the sum. Whether it's easier to return a value or use a parameter depends on the function.
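For illustration, a caller would use it something like this (a self-contained sketch of the same idea):
#include <iostream>

void add2numbers(double a, double b, double &sum)
{
    sum = a + b;
}

int main()
{
    double total = 0.0;
    add2numbers(2.5, 4.0, total);  // total is filled in through the reference
    std::cout << total << "\n";    // prints 6.5
}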
Typically you would use a parameter if there are multiple results, but suppose you had a maths routine where an answer might not be possible:
bool sqrt(double value, double &answer)
{
if (value < 0.0) {
return false;
} else {
answer = real_sqrt_function(value);
return true;
}
}
I currently use a visualization library called VTK. I normally write void functions to update some part of the graphics that are displayed to the screen. I also use void functions to handle GUI interaction within Qt. For example, if you click a button, some text gets updated on the GUI.
You're completely right: calculating a function that returns nothing is meaningless, if you're talking about mathematical functions. But as with many mathematical concepts, "functions" in most programming languages are only loosely related to mathematical functions, and behave subtly differently.
I believe it's good to explain this with a language that does not get it wrong: one such language is Haskell. That's a purely functional language, which means a Haskell function is also a mathematical function. Indeed, you can write Haskell functions in a much more mathematical style, e.g.
my_tan(x) = sin(x)/cos(x) -- or (preferred): tan' x = sin x / cos x
than in C++
double my_tan(double x) { return sin(x)/cos(x); }
However, in computer programs you don't just want to calculate functions, do you? You also want to get stuff done, like displaying something on your screen, sending data over the network, reading values from sensors etc. In Haskell, things like these are well separated from pure functions; they all act in the so-called IO monad. For instance, the function putStrLn, which prints a line of characters, has type String -> IO (). Meaning, it takes a String as its argument and returns an IO action which prints out that string when invoked from the main function, and nothing else (the () is roughly what void is in C++).
This way of doing IO has many benefits, but most programming languages are more sloppy: they allow all functions to do IO, and also to change the internal state of your program. So in C++, you could simply have a function void putStrLn(std::string), which also "returns" an IO action that prints the string and nothing else, but does not explicitly tell you so. The benefit is that you don't need to tie multiple knots in your brain when thinking about what the IO monad actually is (it's rather roundabout). Also, many algorithms can be implemented to run faster if you have the ability to actually tell the machine "do this sequence of processes, now!" rather than just asking for the result of some computation in the IO monad.
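For illustration only, such a C++ analogue might look like this (putStrLn here is a made-up name, not a standard C++ function):
#include <iostream>
#include <string>

// Does all of its work through the side effect of writing to stdout.
void putStrLn(const std::string& line)
{
    std::cout << line << '\n';
}

int main()
{
    putStrLn("hello world");
}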
I have a legacy interface that has a function with a signature that looks like the following:
int provide_values(int &x, int &y)
x and y are considered output parameters in this function. Note: I'm aware of the drawbacks of using output parameters and that there are better design choices for such an interface. I'm not trying to debate the merits of this interface.
Within the implementation of this function, it first checks to see if the addresses of the two output parameters are the same, and returns an error code if they are.
if (&x == &y) {
return -1; // Error: both output parameters are the same variable
}
Is there a way at compile time to prevent callers of this function from providing the same variable for the two output parameters without having such a check within the body of the function? I'm thinking of something similar to the restrict keyword in C, but that only is a signal to the compiler for optimization, and only provides a warning when compiling code that calls such a function with the same pointer.
No, there's not. Keep in mind that the calling code could derive x and y from references returned by arbitrary black-box functions. But even otherwise, it is impossible in general for the compiler to robustly determine whether they refer to the same object (the problem is undecidable, much like the halting problem), since which objects they are bound to is determined by the execution of the program.
If all you want to do is prevent the user from calling provide_values(xyz, xyz), you can use a macro as in the following example. However, this won't protect the user from calling provide_values(xyz, reference_to_xyz), so the whole thing is probably pointless anyway.
#include <cstring>
void provide_values(int&, int&) {}
#define PROV_VAL(x, y) if (strcmp((#x),(#y))) { provide_values(x, y); } else { throw -1; }
int main()
{
int x;
int y;
PROV_VAL(x,y);
//PROV_VAL(x,x); // this throws
int& z = x;
PROV_VAL(x,z); // this passes though!
}
Consider the following code:
file_1.hpp:
typedef void (*func_ptr)(void);
func_ptr file1_get_function(void);
file_1.cpp:
// file_1.cpp
#include "file_1.hpp"
static void some_func(void)
{
do_stuff();
}
func_ptr file1_get_function(void)
{
return some_func;
}
file2.cpp:
#include "file_1.hpp"
void file2_func(void)
{
func_ptr function_pointer_to_file1 = file1_get_function();
function_pointer_to_file1();
}
While I believe the above example is technically possible - to call a function with internal linkage only via a function pointer, is it bad practice to do so? Could there be some funky compiler optimizations that take place (auto inline, for instance) that would make this situation problematic?
There's no problem, this is fine. In fact, IMHO, it is good practice, since it lets your function be called without polluting the space of externally visible symbols.
It would also be appropriate to use this technique in the context of a function lookup table, e.g. a calculator which passes in a string representing an operator name and expects back a pointer to the function that performs that operation.
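A rough sketch of that idea, with made-up names, might look like this:
#include <cstdio>
#include <cstring>

typedef double (*binop_fn)(double, double);

// Internal linkage: these functions are never exported by name.
static double op_add(double a, double b) { return a + b; }
static double op_sub(double a, double b) { return a - b; }

// The only externally visible symbol is the lookup function.
binop_fn find_operator(const char* name)
{
    if (std::strcmp(name, "add") == 0) return op_add;
    if (std::strcmp(name, "sub") == 0) return op_sub;
    return nullptr;
}

int main()
{
    binop_fn fn = find_operator("add");
    if (fn) std::printf("%f\n", fn(2.0, 3.0));  // prints 5.000000
}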
The compiler/linker isn't allowed to make optimizations which break correct code and this is correct code.
Historical note: back in C89, externally visible symbols had to be unique on the first 6 characters; this was relaxed in C99 and also commonly by compiler extension.
In order for this to work, you have to expose some portion of it as external and that's the clue most compilers will need.
Is there a chance that there's a broken compiler out there that will make mincemeat of this strange practice because they didn't foresee someone doing it? I can't answer that.
I can only think of weak reasons to want to do this, though. One is fingerprint hiding, which fails because you have to expose the signature in the function pointer declaration anyway, unless you are planning to cast your way around things, in which case the question becomes "how badly is this going to hurt?".
The other reason would be facading callbacks - you have some super-sensitive static local function in module m and you now want to expose the functionality in another module for callback purposes, but you want to audit that so you want a facade:
using fnptr = void (*)();  // assumption: the callback type in question

static void voodoo_function() {
}

fnptr get_voodoo_function(const char* file, int line) {
    // you tagged the question as C++, so C++ I/O it is.
    std::cout << "requested voodoo function from " << file << ":" << line << "\n";
    return voodoo_function;
}
...
// question tagged as c++, so I'm using c++ syntax
auto* fn = get_voodoo_function(__FILE__, __LINE__);
but that's not really helping much, you really want a wrapper around execution of the function.
At the end of the day, there is a much simpler way to expose a function pointer. Provide an accessor function.
static void voodoo_function() {}
void do_voodoo_function() {
// provide external access to voodoo
voodoo_function();
}
Because here you provide the compiler with an optimization opportunity - when you link, if you specify whole program optimization, it can detect that this is a facade that it can eliminate, because you let it worry about function pointers.
But is there a really compelling reason not to just remove the static from in front of voodoo_function, other than not exposing the internal name for it? And if so, why is the internal name so precious that you would go to these lengths to hide it?
static void ban_account_if_user_is_ugly() {
...;
}
fnptr do_that_thing() {
    return ban_account_if_user_is_ugly;
}
vs
void do_that_thing() { // ban account if user is ugly
...
}
--- EDIT ---
Conversion. Your function pointer is int(*)(int) but your static function is unsigned int(*)(unsigned int) and you don't want to have to cast it.
Again: Just providing a facade function would solve the problem, and it will transform into a function pointer later. Converting it to a function pointer by hand can only be a stumbling block for the compiler's whole program optimization.
But if you're casting, lets consider this:
// v1
fnptr get_fn_ptr() {
// brute force cast because otherwise it's 'hassle'
return (fnptr)(static_fn);
}
int facade_fn(int i) {
auto ui = static_cast<unsigned int>(i);
auto result = static_fn(ui);
return static_cast<int>(result);
}
OK, unsigned to signed, not a big deal. But then someone comes along and changes fnptr to void (*)(int, float). One of the above becomes a weird runtime crash and the other becomes a compile error.
So I ran across this (IMHO) very nice idea of using a composite structure of a return value and an exception - Expected<T>. It overcomes many shortcomings of the traditional methods of error handling (exceptions, error codes).
See the Andrei Alexandrescu's talk (Systematic Error Handling in C++) and its slides.
The exceptions and error codes have basically the same usage scenarios with functions that return something and the ones that don't. Expected<T>, on the other hand, seems to be targeted only at functions that return values.
So, my questions are:
Have any of you tried Expected<T> in practice?
How would you apply this idiom to functions returning nothing (that is, void functions)?
Update:
I guess I should clarify my question. The Expected<void> specialization makes sense, but I'm more interested in how it would be used - the consistent usage idiom. The implementation itself is secondary (and easy).
For example, Alexandrescu gives this example (a bit edited):
string s = readline();
auto x = parseInt(s).get(); // throw on error
auto y = parseInt(s); // won’t throw
if (!y.valid()) {
// ...
}
This code is "clean" in a way that it just flows naturally. We need the value - we get it. However, with expected<void> one would have to capture the returned variable and perform some operation on it (like .throwIfError() or something), which is not as elegant. And obviously, .get() doesn't make sense with void.
So, what would your code look like if you had another function, say toUpper(s), which modifies the string in-place and has no return value?
Have any of you tried Expected<T> in practice?
It's quite natural, I used it even before I saw this talk.
How would you apply this idiom to functions returning nothing (that is, void functions)?
The form presented in the slides has some subtle implications:
The exception is bound to the value.
It's ok to handle the exception as you wish.
If the value is ignored for some reason, the exception is suppressed.
This does not hold if you have expected<void>: since nobody is interested in the void value, the exception would always be ignored. I would enforce checking it, just as I would enforce reading from expected<T> in Alexandrescu's class, with assertions and an explicit suppress member function. Rethrowing the exception from the destructor is not allowed, for good reasons, so it has to be done with assertions.
#include <atomic>
#include <cassert>
#include <exception>
#include <stdexcept>
#include <utility>

template <typename T> struct expected;
#ifdef NDEBUG // no asserts
template <> class expected<void> {
std::exception_ptr spam;
public:
template <typename E>
expected(E const& e) : spam(std::make_exception_ptr(e)) {}
expected(expected&& o) : spam(std::move(o.spam)) {}
expected() : spam() {}
bool valid() const { return !spam; }
void get() const { if (!valid()) std::rethrow_exception(spam); }
void suppress() {}
};
#else // with asserts, check if return value is checked
// if all assertions do succeed, the other code is also correct
// note: do NOT write "assert(expected.valid());"
template <> class expected<void> {
std::exception_ptr spam;
mutable std::atomic_bool read; // threadsafe
public:
template <typename E>
expected(E const& e) : spam(std::make_exception_ptr(e)), read(false) {}
expected(expected&& o) : spam(std::move(o.spam)), read(o.read.load()) {}
expected() : spam(), read(false) {}
bool valid() const { read=true; return !spam; }
void get() const { if (!valid()) std::rethrow_exception(spam); }
void suppress() { read=true; }
~expected() { assert(read); }
};
#endif
expected<void> calculate(int i)
{
if (!i) return std::invalid_argument("i must be nonzero");
return {};
}
int main()
{
calculate(0).suppress(); // suppressing must be explicit
if (!calculate(1).valid())
return 1;
calculate(5); // assert fails
}
Even though it might appear new for someone focused solely on C-ish languages, to those of us who had a taste of languages supporting sum-types, it's not.
For example, in Haskell you have:
data Maybe a = Nothing | Just a
data Either a b = Left a | Right b
Where the | reads "or" and the first element (Nothing, Just, Left, Right) is just a "tag". Essentially, sum types are just discriminated unions.
Here, you would have Expected<T> be something like: Either T Exception with a specialization for Expected<void> which is akin to Maybe Exception.
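For illustration, a very rough C++ analogue of that Either shape can be sketched with std::variant (C++17); this is not Alexandrescu's actual Expected implementation, just the same idea:
#include <exception>
#include <iostream>
#include <string>
#include <variant>

// A rough stand-in: holds either a captured exception or a value of type T.
template <typename T>
using expected_like = std::variant<std::exception_ptr, T>;

expected_like<int> parse_int(const std::string& s)
{
    try {
        return std::stoi(s);              // the "Right"/value case
    } catch (...) {
        return std::current_exception();  // the "Left"/error case
    }
}

int main()
{
    auto r = parse_int("42");
    if (std::holds_alternative<int>(r))
        std::cout << std::get<int>(r) << "\n";  // prints 42
}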
Like Matthieu M. said, this is something relatively new to C++, but nothing new for many functional languages.
I would like to add my 2 cents here: part of the difficulties and differences can be found, in my opinion, in the "procedural vs. functional" approach. And I would like to use Scala to illustrate this distinction (because I am familiar with both Scala and C++, and I feel Scala has a facility, Option, which is close to Expected<T>).
In Scala you have Option[T], which is either Some(t) or None.
In particular, it is also possible to have Option[Unit], which is morally equivalent to Expected<void>.
In Scala, the usage pattern is very similar and built around two functions: isDefined() and get(). But it also has a map() function.
I like to think of "map" as the functional equivalent of "isDefined + get":
if (opt.isDefined)
opt.get.doSomething
becomes
val res = opt.map(t => t.doSomething)
"propagating" the option to the result
I think that here, in this functional style of using and composing options, lies the answer to your question:
So, what would your code look like if you had another function, say toUpper(s), which modifies the string in-place and has no return value?
Personally, I would NOT modify the string in place, or at least I would not return nothing. I see Expected<T> as a "functional" concept that needs a functional pattern to work well: toUpper(s) would need to either return a new string, or return itself after modification:
auto res = toUpper(s);
res.get(); ...
or, with a Scala-like map
val finalS = toUpper(s).map(upperS => upperS.someOtherManipulation)
If you don't want to follow a functional route, you can just use isDefined/valid and write your code in a more procedural way:
auto res = toUpper(s);
if (res.valid())
....
If you follow this route (maybe because you need to), there is a "void vs. unit" point to make: historically, void was not considered a type but "no type" (void foo() was considered akin to a Pascal procedure). Unit (as used in functional languages) is more seen as a type meaning "a computation". So returning Option[Unit] makes more sense, being seen as "a computation that optionally did something". And in Expected<void>, void assumes a similar meaning: a computation that, when it works as intended (when there are no exceptional cases), just ends (returning nothing). At least, IMO!
So, using Expected<void> or Option[Unit] could be seen as computations that maybe produced a result, or maybe not. Chaining them proves difficult:
auto c1 = doSomething(s); //do something on s, either succeed or fail
if (c1.valid()) {
auto c2 = doSomethingElse(s); //do something on s, either succeed or fail
if (c2.valid()) {
...
Not very clean.
Map in Scala makes it a little bit cleaner
doSomething(s) //do something on s, either succeed or fail
.map(_ => doSomethingElse(s) //do something on s, either succeed or fail
.map(_ => ...)
Which is better, but still far from ideal. Here, the Maybe monad clearly wins... but that's another story..
I've been pondering the same question since I watched this video. So far I haven't found any convincing argument for having Expected<void>; to me it looks ridiculous and against clarity and cleanness. I have come up with the following so far:
Expected<T> is good since it holds either a value or an exception, so we are not forced to use try/catch around every function which can throw. So use it for every throwing function which has a return value.
Every function that doesn't throw should be marked with noexcept. Every.
Every function that returns nothing and is not marked noexcept should be wrapped in try/catch.
If those statements hold, then we have self-documenting, easy-to-use interfaces with only one drawback: we don't know what exceptions could be thrown without peeking into implementation details.
Expected<T> imposes some overhead on the code, since if an exception occurs in the guts of your class implementation (e.g. deep inside private methods), you have to catch it in your interface method and return an Expected. While I think this is quite tolerable for methods which naturally return something, I believe it brings mess and clutter to methods which by design have no return value. Besides, to me it is quite unnatural to return a thing from something that is not supposed to return anything.
It should be handled with compiler diagnostics. Many compilers already emit warning diagnostics based on expected usages of certain standard library constructs. They should issue a warning for ignoring an expected<void>.
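In fact, since C++17 you can opt into that diagnostic yourself by marking the type [[nodiscard]]. A minimal sketch (expected_void is just a stand-in for whatever expected<void> implementation you use):
// Marking the type [[nodiscard]] makes most compilers warn whenever a
// returned expected_void is silently dropped.
struct [[nodiscard]] expected_void
{
    bool ok = true;
    bool valid() const { return ok; }
};

expected_void calculate(int i)
{
    return expected_void{ i != 0 };
}

int main()
{
    calculate(0);               // typically triggers an "ignoring return value" warning
    if (!calculate(1).valid())  // fine: the result is inspected
        return 1;
}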
I am developing a library of some utility functions in C++. I have a question regarding the function signatures in that library. If a function takes some parameters and returns a value, should the variable into which the result of that function is stored also be passed as a parameter to that function? And how should I handle error conditions and return values for errors?
For C++ you should return the result and handle errors with exceptions.
#include <iostream>
using namespace std;

int calc_with_error() {
    throw yourExceptionClass("Message");  // yourExceptionClass stands for your own exception type
}

int calc() {
    return 5;
}

int main() {
    int tmp = calc();
    cout << tmp;
}
The result is then copied from the function to the calling context. With primitive datatypes this is the fastest possible way. But when you have complex data structures, it can be faster to pass a reference to a result parameter, although it's not as clean as the solution above. An example would be:
#include <vector>
using namespace std;

void calc(vector<int> &result) {
    result.clear();
    result.push_back(5);
}
int main() {
vector<int> tmp;
calc(tmp);
//Do something with the vector
}
There are several options, and it's largely a matter of preference.
One thing you should do is, in most cases, keep outputs and errors separate.
It's usually good to return success/error as the return value, and return data in an output parameter, passed by reference.
Don't do these:
1. Use "magic values" as an error indication.
2. Use global variables to return the data.
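A small sketch of that convention (the names are illustrative):
#include <iostream>
#include <string>

enum class Status { Ok, NotFound };

// Success/failure travels through the return value;
// the actual data travels through the output parameter.
Status lookupUser(int id, std::string &nameOut)
{
    if (id != 42)
        return Status::NotFound;
    nameOut = "Alice";
    return Status::Ok;
}

int main()
{
    std::string name;
    if (lookupUser(42, name) == Status::Ok)
        std::cout << name << "\n";
}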
People often tell me not to return error values, because it is not best practice; it is better to throw exceptions, which are easier to handle than error codes. Output parameters are fine too: I mostly use them for big data, while for simple results the plain return value works well.
To illustrate: this, of course, is not such a good design:
void checkSomething(bool& output)
{
output = doCheckages();
}
This is much better:
bool checkSomething()
{
return doCheckages();
}
But if you're handling a large class/structure, and you know you don't want to create lots of instances of it, it may be better to pass it as an output parameter.
I have a setup that looks like this.
class Checker
{ // member data
Results m_results; // see below
public:
bool Check();
private:
bool Check1();
bool Check2();
// .. so on
};
Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores. (see below)
bool Checker::Check()
{ // initilisations etc.
Check1();
Check2();
// ... so on
}
A typical Check function would look like this:
bool Checker::Check1()
{ double result;
// lots of code
m_results.SetCheck1Result(result);
}
And the results class looks something like this:
class Results
{ double m_check1Result;
double m_check2Result;
// ...
public:
void SetCheck1Result(double d);
double GetOverallResult()
{ return max(m_check1Result, m_check2Result, ...); }
};
Note: all code is oversimplified.
The Checker and Result classes were initially written to perform all checks and return an overall double result. There is now a new requirement where I only need to know if any of the results exceeds 1. If it does, subsequent checks need not be carried out (it's an optimisation). To achieve this, I could either:
Modify every CheckN function to keep check for result and return. The parent Check function would keep checking m_results. OR
In the Results::SetCheckNResults(), throw an exception if the value exceeds 1 and catch it at the end of Checker::Check().
The first is tedious, error prone and sub-optimal because every CheckN function further branches out into sub-checks etc.
The second is non-intrusive and quick. One disadvantage I can think of is that the Checker code may not necessarily be exception-safe (although no other exception is thrown anywhere else). Is there anything else obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding?
Is there a better 3rd option?
I don't think this is a good idea. Exceptions should be limited to, well, exceptional situations. Yours is a question of normal control flow.
It seems you could very well move all the redundant code dealing with the result out of the checks and into the calling function. The resulting code would be cleaner and probably much easier to understand than non-exceptional exceptions.
Change your CheckX() functions to return the double they produce and leave dealing with the result to the caller. The caller can more easily do this in a way that doesn't involve redundancy.
If you want to be really fancy, put those functions into an array of function pointers and iterate over that. Then the code for dealing with the results would all be in a loop. Something like:
bool Checker::Check()
{
for( std::size_t idx=0; idx<sizeof(check_tbl)/sizeof(check_tbl[0]); ++idx ) {
double result = check_tbl[idx]();
if( result > 1 )
return false; // or whichever way your logic is (an enum might be better)
}
return true;
}
Edit: I had overlooked that you also need to call one of the N SetCheckNResult() functions, which my sample code above doesn't accommodate. So either you can shoehorn these into an array, too (change them to SetCheckResult(std::size_t idx, double result)), or you would have to have two function pointers in each table entry:
struct check_tbl_entry {
check_fnc_t checker;
set_result_fnc_t setter;
};
check_tbl_entry check_tbl[] = { { &Checker::Check1, &Checker::SetCheck1Result }
, { &Checker::Check2, &Checker::SetCheck2Result }
// ...
};
bool Checker::Check()
{
for( std::size_t idx=0; idx<sizeof(check_tbl)/sizeof(check_tbl[0]); ++idx ) {
double result = check_tbl[idx].checker();
check_tbl[idx].setter(result);
if( result > 1 )
return false; // or whichever way your logic is (an enum might be better)
}
return true;
}
(And, no, I'm not going to attempt to write down the correct syntax for a member function pointer's type. I've always had to look this up and still never got it right the first time... But I know it's doable.)
Exceptions are meant for cases that shouldn't happen during normal operation. They're hardly non-intrusive; their very nature involves unwinding the call stack, calling destructors all over the place, yanking the control to a whole other section of code, etc. That stuff can be expensive, depending on how much of it you end up doing.
Even if it were free, though, using exceptions as a normal flow control mechanism is a bad idea for one other, very big reason: exceptions aren't meant to be used that way, so people don't use them that way, so they'll be looking at your code and scratching their heads trying to figure out why you're throwing what looks to them like an error. Head-scratching usually means you're doing something more "clever" than you should be.