I have code where the user has to pass a number greater than 0, otherwise the code throws. Declaring the argument as std::size_t doesn't work, because negative arguments get converted to large positive numbers. Is it good practice to use a signed type instead, or are there other ways to enforce this?
void f(std::size_t number)
{
    // if -1 is passed, I (for the obvious reason) get a large positive number
}
I don't think there's a definitive correct answer to this question. You can take a look at Scott Meyers's opinion on the subject:
One problem is that unsigned types tend to decrease your ability to detect common programming errors. Another is that they often increase the likelihood that clients of your classes will use the classes incorrectly.
In the end, the question to ask is really: do you need the extra possible values provided by the unsigned type?
A lot depends on what type of argument you imagine your clients trying to pass. If they're passing int, and that's clearly big enough to hold the range of values you're going to use, then there's no practical advantage to using std::size_t - it won't enforce anything, and the way the issue manifests as an apparently huge number is simply more confusing.
BUT - it is good to use size_t anyway as it helps document the expectations of the API.
You clearly can't do a compile-time check of "> 0" against a run-time value, but you can at least disambiguate negative inputs from intentionally huge numbers, à la
#include <stdexcept>

template <typename T>
void f(T t)
{
    if (!(t > 0))
        throw std::runtime_error("be positive!");
    // do stuff with t, knowing it's not -1 shoehorned into size_t...
}
But, if you are really concerned about this, you could provide overloads:
// call me please e.g. f(size_t(10));
void f(size_t);

// unimplemented (and private if possible)...
// "want to make sure you realise this is unsigned: call f(size_t) explicitly"
void f(int32_t);
void f(int64_t);
...then there's a compile-time error leading the caller to the comments about explicitly providing a size_t argument (casting if necessary). Forcing the client to provide an argument of size_t type is a pretty good way to make sure they're conscious of the issue.
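If C++11 is available, here is a minimal sketch of the same idea using deleted functions instead of unimplemented private overloads (the body of f is just a placeholder):
#include <cstddef>
#include <cstdint>
#include <iostream>

void f(std::size_t number) { std::cout << "f(" << number << ")\n"; }

// Deleted overloads force the caller to pass a size_t explicitly.
void f(std::int32_t) = delete;
void f(std::int64_t) = delete;

int main()
{
    f(std::size_t(10));  // OK: the caller explicitly acknowledges the unsigned type
    // f(-1);            // compile-time error: use of deleted function
}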
Rin's got a good idea too - it would work really well where it works at all (it depends on there being a signed integer type larger than size_t). Go check it out.
EDIT - demonstration of template idea above...
#include <iostream>

template <typename T>
void f(T t)
{
    if (!(t > 0))
        std::cout << "bad call f(" << (int)t << ")\n";
    else
        std::cout << "good f(" << (int)t << ")\n";
}

int main()
{
    f((char)-1);
    f((unsigned char)255);
}
I had the same problems you're having: Malfunctioning type-casting of string to unsigned int
Since, in my case, I'm getting the input from the user, my approach was to read the data as a string and check its contents.
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

// EmptyInput and InvalidInput are user-defined exception types (not shown here).
template <class T>
T getNumberInput(std::string prompt, T min, T max) {
    std::string input;
    T value;
    while (true) {
        try {
            std::cout << prompt;
            std::cin.clear();
            std::getline(std::cin, input);
            std::stringstream sstream(input);
            if (input.empty()) {
                throw EmptyInput<std::string>(input);
            } else if (input[0] == '-' && std::numeric_limits<T>::min() == 0) {
                // reject a leading '-' when T is an unsigned type
                throw InvalidInput<std::string>(input);
            } else if ((sstream >> value) && (value >= min) && (value <= max)) {
                std::cout << std::endl;
                return value;
            } else {
                throw InvalidInput<std::string>(input);
            }
        } catch (EmptyInput<std::string>& emptyInput) {
            std::cout << "The field cannot be empty!\n" << std::endl;
        } catch (InvalidInput<std::string>& invalidInput) {
            std::cout << "Invalid data type!\n" << std::endl;
        }
    }
}
If your allowed value range for number allows it, use the signed std::ptrdiff_t (like Alexey said).
Or use a library like SafeInt and have f declared something like this: void f( SafeInt< std::size_t > i );, which throws if you call it with something like f( -1 );.
First solution
void f(std::ptrdiff_t number) {
    if (number < 0) throw std::invalid_argument("number must be positive");
}
Second solution
void f(std::size_t number) {
    if (number > std::numeric_limits<std::size_t>::max()/2)
        throw std::invalid_argument("suspiciously large number");
}
Maybe you should wrap the read function in another function whose purpose is to get an int and validate it.
EDIT: OK, int was just the first idea, so read and parse a string instead.
This is one of the situations where you cannot really do much. The compiler usually gives out a warning when converting signed to unsigned data-types, so you will have to trust the caller to heed that warning.
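For example, with GCC or Clang, a minimal sketch of a call site that triggers the warning when built with -Wsign-conversion:
#include <cstddef>

void f(std::size_t number) { (void)number; }

int main()
{
    int n = -1;
    f(n); // warns with -Wsign-conversion: the conversion may change the value's sign
}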
You could test this using bitwise operation, such as the following:
void f(std::size_t number)
{
    // Note: != binds tighter than &, so the mask test needs its own parentheses,
    // and the 1 must be a size_t so the shift is wide enough.
    if ((number & ((std::size_t)1 << (sizeof(std::size_t) * 8 - 1))) != 0)
    {
        // high bit is set. either someone passed in a negative value,
        // or a very large size that we'll assume is invalid.
        // error case goes here
    }
    else
    {
        // valid value
    }
}
This code assumes 8-bit bytes. =)
Yes, large values will fail, but you could document that they are not allowed, if you really need to protect against this.
Who is using this API? Rather than using a solution like this, I would recommend that they fix their code. =) I would think that the "normal" best practice would be for callers to use size_t, and have their compilers complain loudly if they try to put signed values into them.
I had to think about this question a little, and this is what I would do.
If your function is responsible for throwing an exception when it is passed a negative number, then its signature should accept a signed integer type. That's because if you accept an unsigned number, you will never be able to tell unambiguously whether a negative number was passed, so you won't be able to throw. In other words, you couldn't fulfil your obligation to throw the exception.
You should establish what is an acceptable input range and use a signed integer large enough to fully contain that range.
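A minimal sketch of that suggestion; the bound kMax and the function name are illustrative placeholders:
#include <cstdint>
#include <stdexcept>

void f(std::int64_t number)
{
    const std::int64_t kMax = 1000000; // whatever your acceptable upper bound is

    // A signed parameter keeps -1 as -1, so it can be detected and rejected.
    if (number <= 0 || number > kMax)
        throw std::out_of_range("number must be greater than 0 and at most kMax");
    // ... use number, now known to be in range ...
}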
Related
I am getting some -Wnarrowing conversion errors when doubles are narrowed to floats. How can I do this in a well-defined way? Preferably with an option in a template I can toggle to switch the behavior between throwing exceptions, clamping to the nearest value, or simple truncation.
I was looking at gsl::narrow, but it seems that it just performs a static cast under the hood with a comparison follow-up: Understanding gsl::narrow implementation. I would like something more robust, since according to What are all the common undefined behaviours that a C++ programmer should know about?, static_cast<> is UB if the value is unrepresentable in the target type. I also really liked this implementation, but it also relies on a static_cast<>: Can a static_cast<float> from double, assigned to double be optimized away?
I do not want to use boost for this. Are there any other options? It's best if this works in C++03, but C++0x (experimental C++11) is also acceptable... or 11 if really needed...
Because someone asked, here's a simple toy example:
#include <exception>
#include <iostream>

float doubleToFloat(double num) {
    return static_cast<float>(num);
}

int main(int, char**) {
    double source = 1; // assume 1 could be any valid double value
    try {
        float dest = doubleToFloat(source);
        std::cout << "Source: (" << source << ") Dest: (" << dest << ")" << std::endl;
    } catch (std::exception& e) {
        std::cout << "Got exception error: " << e.what() << std::endl;
    }
}
My primary interest is in adding error handling and safety to doubleToFloat(...), with various custom exceptions if needed.
As long as your floating-point types can store infinities (which is extremely likely), there is no possible undefined behavior. You can test std::numeric_limits<float>::has_infinity if you really want to be sure.
Use static_cast to silence the warning, and if you want to check for an overflow, you can do something like this:
#include <limits>

template <typename T>
bool isInfinity(T f) {
    return f == std::numeric_limits<T>::infinity()
        || f == -std::numeric_limits<T>::infinity();
}

float doubleToFloat(double num) {
    float result = static_cast<float>(num);
    if (isInfinity(result) && !isInfinity(num)) {
        // overflow happened
    }
    return result;
}
Any double value that doesn't overflow will be converted either exactly or to one of the two nearest float values (probably the nearest). You can explicitly set the rounding direction with std::fesetround.
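For instance, here is a sketch of rounding toward zero during the conversion. Note two caveats: compilers may constant-fold conversions unless the value is opaque to them (hence the volatile), and strictly honouring the dynamic rounding mode requires #pragma STDC FENV_ACCESS ON, which not all compilers implement:
#include <cfenv>
#include <iostream>

int main()
{
    const int oldMode = std::fegetround();
    std::fesetround(FE_TOWARDZERO);    // truncate instead of rounding to nearest

    volatile double d = 0.1;           // volatile keeps the compiler from folding the cast
    float f = static_cast<float>(d);   // conversion uses the current rounding mode

    std::fesetround(oldMode);          // restore the previous rounding mode
    std::cout << f << '\n';
}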
It depends on what you mean by "safely". In most cases there will be a drop in precision. Do you want to detect when this happens? Assert, or just know about it and notify the user?
A possible path of solution would be to static_cast the double to a float, then back to a double, and compare the before and after. Equality is unlikely, but you could assert that the loss of precision is within your tolerance.
#include <cmath>

float doubleToFloat(double a_in, bool& ar_withinSpec, double a_tolerance)
{
    auto reducedPrecision = static_cast<float>(a_in);
    auto roundTrip = static_cast<double>(reducedPrecision);
    // compare the round-tripped value against the original, not against the tolerance itself
    ar_withinSpec = (std::fabs(a_in - roundTrip) <= a_tolerance);
    return reducedPrecision;
}
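Hypothetical usage of the sketch above:
bool withinSpec = false;
float f = doubleToFloat(0.1, withinSpec, 1e-9);
// withinSpec now says whether the round-trip error stayed inside the tolerance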
I have a problem. I will use a hypothetical example, as I do not think I need to show my actual function; it is kind of complex. This is the example:
int GetNumberTimesTwo(int num)
{
    return num * 2;
}
Now, let's assume that something bad happens if the number is bigger than two. Is there any way I can force num to be less than or equal to 2? Of course, I could do
int GetNumberTimesTwo(int num)
{
    if (num > 2)
        return 0;
    return num * 2;
}
The problem is that this would be annoying, as it merely prevents the bad case at runtime; I would like to know about this error at compile time. Meaning, is there something like int num < 2 that I can do?
In my dreams, it could be done like that:
int GetNumberTimesTwo(int num < 2)
{
    return num * 2;
}
But as I said, that only happens in my dreams, and because I know C++, I know that nothing ever works quite the way I would like it to. Therefore I have to ask what the correct way to do this is.
C++ What would be the best way to set a maximum number for an integer in the function parameters
There are basically two alternatives:
Decide how to handle invalid input at runtime.
Accept a parameter type of which all possible values are valid, thereby making it impossible to pass invalid input.
For 1. there are many alternatives. Here are some:
Simply document the requirement as a precondition and violating the precondition results in undefined behaviour. Assume that input is always valid and don't check for invalid values. This is the most efficient choice, but least safe.
Throw an exception.
Return std::expected (proposed, not yet a standard feature) or a similar class which contains either a result or an error state, as sketched below.
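Where std::expected is not available, a sketch of the same shape with std::optional (losing only the error details):
#include <optional>

std::optional<int> GetNumberTimesTwo(int num)
{
    if (num > 2)
        return std::nullopt; // error state instead of a result
    return num * 2;
}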
For 2., there are no trivial fool-proof choices. For a small set of valid values, an enum class is reasonably difficult to misuse:
enum class StrangeInput {
    zero = 0,
    one = 1,
    two = 2,
};

int GetNumberTimesTwo(StrangeInput num);
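A possible definition and call site for that signature (hypothetical sketch):
int GetNumberTimesTwo(StrangeInput num)
{
    return static_cast<int>(num) * 2;
}

int ok = GetNumberTimesTwo(StrangeInput::two); // fine, ok == 4
// int bad = GetNumberTimesTwo(3);             // error: no implicit int -> StrangeInput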
There is no way the compiler can validate this, and even if it could, what would the error be? Stop compiling?
The problem is that you are passing a variable to the function. In the universe of possible int values there are certainly values greater than 2, so how would the compiler know that your program never passes one of them?
This is typical runtime validation: you should check the preconditions in your function and handle unexpected values in case they arrive.
With templates and/or constexpr, you can get some compile-time checking, but it requires the value to be known at compile time, and it may restrict which functions you can call inside the function:
#include <stdexcept>

template <int N>
int GetNumberTimesTwo()
{
    static_assert(N <= 2);
    return N * 2;
}

constexpr int GetNumberTimesTwo(int num)
{
    // only code that is valid in a constexpr context is allowed here
    // (the restrictions have changed between standards)
    if (num > 2) throw std::runtime_error("out of range");
    return num * 2;
}
constexpr int ok = GetNumberTimesTwo(1);
constexpr int ko = GetNumberTimesTwo(42); // Compile time error
int no_check1 = GetNumberTimesTwo(1); // ok at runtime
int no_check2 = GetNumberTimesTwo(42); // throw at runtime
This is the way to go:
int GetNumberTimesTwo(int num)
{
    if (num > 2)
    {
        return 0;
    }
    return num * 2;
}
or throw an error:
int GetNumberTimesTwo(int num)
{
    if (num > 2)
    {
        throw std::invalid_argument("num cannot be greater than 2!");
    }
    return num * 2;
}
I am sure this question has been asked already, but I couldn't find the answer.
If I have a function, let's say:
int Power(int number, int degree){
    if(degree==0){
        return 1;
    }
    return number*Power(number, degree-1);
}
It works only when the degree is a non-negative int. How can I prevent this function from being called with wrong parameters?
For example, if the programmer writes cout<<Power(2, -1);, I want the compiler to refuse to compile the code and return some kind of an error message (e.g. "function Power accepts only non-negative integers").
Another alternative would be for the function to not return any value in this case. For example:
int Power(int number, unsigned int degree){
    if(degree<0){
        //return nothing
    }
    if(degree==0){
        return 1;
    }
    return number*Power(number, degree-1);
}
There is an alternative to returning a value: Throw a value. A typical example:
if(degree<0){
    throw std::invalid_argument("degree may not be negative!");
}
I want the compiler to refuse to compile the code
In general, arguments are unknown until runtime, so this is not typically possible.
Your answer does the job for me nicely. But I am curious: 'throw' terminates the program and prevents anything after Power() from being executed.
If you catch the thrown object, then you can continue immediately after the function from which the object was thrown.
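A minimal sketch, assuming Power from the question with the guard above added:
#include <iostream>
#include <stdexcept>

int main()
{
    try {
        std::cout << Power(2, -1) << '\n'; // throws std::invalid_argument
    } catch (const std::invalid_argument& e) {
        std::cout << "bad call: " << e.what() << '\n';
    }
    std::cout << "execution continues here\n"; // still reached after the catch
}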
The mere fact that C++ does implicit type conversions leaves you no way out of the predicament: if you write unsigned int x = -1;, no matter which warnings you turn on with your compiler, you won't see any problem with that.
The only rule coming to mind which might help you here is the notorious "max zero or one implicit conversions" rule. But I doubt it can be exploited in this case (-1 would need to be converted to unsigned int, then to another type, implicitly). And from what I read on the page I linked above, numeric implicit conversions do not really count under some circumstances.
This leaves you but one other, also imperfect, option. In the code below, I outline the basic idea, with endless room to refine it. The option is to resort to optional types in combination with your own integer type. The code below also only hints at what is possible; all of it could be done in some fancy monadic framework or whatnot...
Obviously, in the code posted in the question, it is a bad idea to declare the argument degree as an unsigned int, because a negative value then gets implicitly converted and the function cannot protect itself from a hostile degree of 0xFFFFFFFF (the max value of unsigned int). If it wanted to check, it had better choose int; then it could check for negative values.
The code in the question also invites a stack overflow, given that it does not implement power in a tail-recursive way. But this is just an aside and not the subject of the question at hand, so let's get it quickly out of the way.
// This version at least has a chance to benefit from tail call optimizations.
// (Base case is degree == 0 so that degree 0 terminates and yields 1.)
int internalPower_1 (int acc, int number, int degree) {
    if (0 == degree)
        return acc;
    return internalPower_1(acc*number, number, degree - 1);
}

int Power_1 (int number, int degree) {
    if (degree < 0)
        throw std::invalid_argument("degree < 0");
    return internalPower_1(1, number, degree);
}
Now, would it not be nice if we could have integer types which depend on a valid value range? Other languages have them (e.g. Common Lisp). Unless there is already something in boost (I did not check), we have to roll such a thing ourselves.
Code first, excuses later:
#include <iostream>
#include <stdexcept>
#include <limits>
#include <optional>
#include <string>
template <int MINVAL = std::numeric_limits<int>::min(),
          int MAXVAL = std::numeric_limits<int>::max()>
struct Integer
{
    int value;

    static constexpr int MinValue() { return MINVAL; }
    static constexpr int MaxValue() { return MAXVAL; }

    using Class_t = Integer<MINVAL,MAXVAL>;
    using Maybe_t = std::optional<Class_t>;

    // Values passed in during run time get handled
    // and punished at run time.
    // No way to work around this, because we are
    // feeding our thing of beauty from the nasty
    // outside world.
    explicit Integer (int v)
        : value{v}
    {
        if (v < MINVAL || v > MAXVAL)
            throw std::invalid_argument("Value out of range.");
    }

    static Maybe_t Init (int v) {
        if (v < MINVAL || v > MAXVAL) {
            return std::nullopt;
        }
        return Maybe_t(v);
    }
};
using UInt = Integer<0>;
using Int = Integer<>;
std::ostream& operator<< (std::ostream& os,
                          const typename Int::Maybe_t& v) {
    if (v) {
        os << v->value;
    } else {
        os << std::string("NIL");
    }
    return os;
}

template <class T>
auto operator* (const T& x, const T& y) -> T {
    if (x && y)
        return T::value_type::Init(x->value * y->value);
    return std::nullopt;
}
Int::Maybe_t internalPower_3 (const Int::Maybe_t& acc,
                              const Int::Maybe_t& number,
                              const UInt::Maybe_t& degree) {
    if (!acc) return std::nullopt;
    if (!degree) return std::nullopt;
    if (0 == degree->value) {
        return acc;  // base case at 0, so degree 0 yields 1
    }
    return internalPower_3(acc * number,
                           number,
                           UInt::Init(degree->value - 1));
}

Int::Maybe_t Power_3 (const Int::Maybe_t& number,
                      const UInt::Maybe_t& degree) {
    if (!number) return std::nullopt;
    return internalPower_3(Int::Init(1),
                           number,
                           degree);
}

int main (int argc, const char* argv[]) {
    std::cout << Power_1(2, 3) << std::endl;
    std::cout << Power_3(Int::Init(2),
                         UInt::Init(3)) << std::endl;
    std::cout << Power_3(Int::Init(2),
                         UInt::Init(-2)) << std::endl;
    std::cout << "UInt min value = "
              << UInt::MinValue() << std::endl
              << "UInt max value = "
              << UInt::MaxValue() << std::endl;
    return 0;
}
The key here is that the function Int::Init() returns Int::Maybe_t. Thus, before the error can propagate, the user gets a std::nullopt very early if they try to init with a value which is out of range. Using the constructor of Integer instead would result in an exception.
In order for the code to be able to check, both signed and unsigned instances of the template (e.g. Integer<-10,10> or Integer<0,20>) use a signed int as storage, and are thus able to catch invalid values sneaking in via implicit type conversions. The expense is that our unsigned type on a 32-bit system has only 31 usable bits...
What this code does not show, but which could be nice, is the idea that the resulting type of an operation on two (different instances of) Integer could be yet another instance of Integer. Example: auto x = Integer<0,5>::Init(3) - Integer<0,5>::Init(5). In the current implementation, this results in a nullopt, preserving the type Integer<0,5>. In a maybe better world, though, the result would be an Integer<-2,5>.
Anyway, as it is, some might find my little Integer<,> experiment interesting. After all, using types to be more expressive is good, right? A function signature like typename Integer<-10,0>::Maybe_t foo(Integer<0,5>::Maybe_t x) is quite self-explanatory as to which range of values is valid for x.
I am debating whether I can get rid of a compiler warning. The warning comes from comparing a uint32 to -1.
Now, just from glancing at it, this seems like an unwise thing to do, since a uint32 should never be negative; but I didn't write this code and am not as familiar with the C++ way of doing things, so I am asking you. Here is some example code to illustrate what's happening.
bool isMyAddressValid = false;
unsigned int myAddress(-1);
unsigned int translatedAddress;

if(isMyAddressValid)
{
    translatedAddress = 500;
}
else
{
    translatedAddress = -1;
}

myAddress = translatedAddress;

if(myAddress == -1)
{
    std::cout << "ERROR OCCURED";
}
else
{
    std::cout << "SUCCESS";
}
So is this valid code? Is this some C-ism that I do not properly understand?
Setting an unsigned type to -1 is the idiomatic way of setting it to its maximum possible value, regardless of the number of bits in the type.
A clumsier but perhaps clearer way would be to write
translatedAddress = std::numeric_limits<decltype(translatedAddress)>::max();
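You can convince yourself of the equivalence at compile time:
#include <limits>

static_assert(static_cast<unsigned int>(-1)
                  == std::numeric_limits<unsigned int>::max(),
              "-1 converts to the maximum unsigned value");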
If it is in your arsenal of libraries, I would use std::optional or boost::optional
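A sketch of how that would replace the sentinel; the function name is made up for illustration:
#include <iostream>
#include <optional>

std::optional<unsigned int> translateAddress(bool valid)
{
    if (!valid)
        return std::nullopt; // "no address", without reserving a sentinel value
    return 500u;
}

int main()
{
    auto address = translateAddress(false);
    std::cout << (address ? "SUCCESS" : "ERROR OCCURED") << '\n';
}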
The code is valid according to the standard; both assignment and the equality check apply the integer conversions to their operands. It is, however, a C-ism to use a sentinel value; I would use an exception instead.
I'm trying to validate user input for integers only. My code works fine except when the user inputs 0. It doesn't consider it an integer, and it thinks the value is false. Here's a brief example of how I'm coding this project....
#include <iostream>
using namespace std;

int main ()
{
    int num;
    cout << "Please enter an integer: ";
    cin >> num;
    cout << endl;
    while (! num)
    {
        cout << "That is not an integer.\n";
        return 1;
    }
}
If the user inputs 0, I get sent into the while loop even though 0 is an integer.
The expression !num is true if, and only if, num is 0. So your implementation is buggy.
The easiest thing to do is to use something like
if (!(std::cin >> num)){
    std::cout << "That is not an integer.\n";
}
If you want to validate the input yourself then consider reading in a std::string and checking if that can be converted to an integer. This is surprisingly non-trivial since the possible values an int can take are so platform dependent (some systems have a range as small as -32767 to +32767).
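One sketch of that approach, using std::stoi (C++11), which throws on non-numeric and out-of-range input:
#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
    std::string line;
    std::getline(std::cin, line);
    try {
        std::size_t pos = 0;
        int num = std::stoi(line, &pos); // throws invalid_argument / out_of_range
        if (pos != line.size())
            throw std::invalid_argument("trailing characters");
        std::cout << "Read " << num << '\n';
    } catch (const std::exception&) {
        std::cout << "That is not an integer.\n";
    }
}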
If you can, use boost::lexical_cast<int>(num); where I've upgraded num to a std::string type. (The header you need is <boost/lexical_cast.hpp>).
In C++ the built-in int type doesn't have any optional validity to it like it does in some other languages: !num just means num == 0, not "num does not exist".
However, C++17 has the std::optional template, which can turn your vanilla int into exactly what you originally expected!
All you need is some simple template magic to make it work with istream:
#include <istream>
#include <optional>

template< class CharT, class Traits, class T >
std::basic_istream<CharT,Traits>& operator>>( std::basic_istream<CharT,Traits>& st,
                                              std::optional<T>& value )
{
    T res;
    if (st >> res)
        value = res;          // extraction succeeded
    else
        value = std::nullopt; // extraction failed: leave the optional empty
    return st;
}
Come to think of it, the standard library should provide that overload out of the box.
Now all you need is to replace your int num with std::optional<int> num and voila - your code works as intended.
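For example, a small sketch of the resulting code, given the overload above:
std::optional<int> num;
std::cin >> num;  // uses the operator>> overload above
if (!num)
    std::cout << "That is not an integer.\n";
else
    std::cout << "You entered " << *num << '\n';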
Seriously though, just use Bathsheba's solution.