I'm using a C library that uses unsigned integers as indices into some data. But sometimes, functions return those indices as signed integers so that they can return -1 when the function fails.
How do I prevent "implicit conversion changes signedness" warnings and instead throw runtime errors if the conversion isn't possible? Would you recommend wrapping the library functions to use exceptions for error handling and only return valid values?
Is there a standard way to do this:
#include <stdlib.h>
#include <errno.h>
#include <limits.h>
// pointless C function to demonstrate the question:
// parse the string as an unsigned integer, return -1 on failure
int atoui(char const* str) {
    char* pend;
    errno = 0; // strtol reports range errors via errno, so clear it first
    long int li = strtol(str, &pend, 10);
    if (errno != 0 || *pend != '\0' || li < 0 || li > INT_MAX) {
        return -1;
    } else {
        return li;
    }
}
// --8<---
#include <stdexcept>
// How to do this properly?
unsigned int unsign(int i) {
    if (i < 0) {
        throw std::runtime_error("Tried to cast negative int to unsigned int");
    } else {
        return static_cast<unsigned>(i);
    }
}

int main() {
    unsigned int j = unsign(atoui("42")); // OK
    unsigned int k = unsign(atoui("-7")); // Runtime error
}
The standard library has no such function, but it's easy enough to write such a template:
#include <type_traits> // std::enable_if_t, std::is_integral_v, std::make_unsigned_t

template<typename SInt, typename = std::enable_if_t<std::is_integral_v<SInt> && std::is_signed_v<SInt>>>
constexpr auto unsigned_cast(SInt i)
{
    if (i < 0) throw std::domain_error("Outside of domain");
    return static_cast<std::make_unsigned_t<SInt>>(i);
}
You can also return an optional if you don't like throwing exceptions for such trivial matters:
#include <optional>

template<typename SInt, typename = std::enable_if_t<std::is_integral_v<SInt> && std::is_signed_v<SInt>>>
constexpr std::optional<std::make_unsigned_t<SInt>> unsigned_cast_opt(SInt i)
{
    if (i < 0) return std::nullopt;
    return static_cast<std::make_unsigned_t<SInt>>(i);
}
If you want a range check at runtime (i.e. permitting the conversion between types iff the value can be preserved), Boost has numeric_cast that achieves this.
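A minimal sketch of what that looks like (assuming Boost is available; boost::numeric_cast throws boost::numeric::bad_numeric_cast when the value doesn't fit the target type):

#include <boost/numeric/conversion/cast.hpp>
#include <iostream>

int main() {
    int i = -7;
    try {
        // Throws boost::numeric::negative_overflow (derived from
        // bad_numeric_cast), since -7 is not representable as unsigned.
        unsigned int u = boost::numeric_cast<unsigned int>(i);
        std::cout << u << '\n';
    } catch (boost::numeric::bad_numeric_cast const& e) {
        std::cout << e.what() << '\n';
    }
}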
And if you don't want to use Boost, your approach looks decent enough.
Edit: I missed that you were using C++, my previous answer assumed C only.
The simplest and most standard way is to use
std::optional<unsigned int> index;
instead of using -1 or some other sentinel value to represent an invalid index. If the index is invalid, you just don't set the optional. Then you can query it with
index.has_value()
to find out if it is valid or not.
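For example, a thin wrapper (parse_index is a hypothetical name) can translate the library's -1 sentinel into an empty optional right at the boundary:

#include <optional>

// Hypothetical wrapper around the C function atoui() from the question:
// map the -1 failure sentinel to an empty optional at the API boundary.
std::optional<unsigned int> parse_index(char const* str) {
    int i = atoui(str);
    if (i < 0) return std::nullopt;
    return static_cast<unsigned int>(i);
}

// Usage:
// if (auto index = parse_index("42"); index.has_value()) { /* use *index */ }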
Related
I have a problem. I will use a hypothetical example, as I do not think I need to show my actual function (it is kind of complex). This is the example:
int GetNumberTimesTwo(int num)
{
    return num * 2;
}
Now, let's assume that something bad happens if the number is bigger than two. Is there any way I can force num to be less than or equal to 2? Of course, I could do
int GetNumberTimesTwo(int num)
{
    if (num > 2)
        return 0; // must return *something*; a bare `return;` would not compile
    return num * 2;
}
The problem is that this would be annoying, as it merely prevents the bad case at runtime; I would like to know about this error at compile time. Meaning, is there something like int num < 2 that I can do?
In my dreams, it could be done like that:
int GetNumberTimesTwo(int num < 2)
{
    return num * 2;
}
But as I said, in my dreams, and because I know C++, I know that nothing ever works like I would like it to work, therefore I have to ask what the correct way to do this is.
C++ What would be the best way to set a maximum number for an integer in the function parameters
There are basically two alternatives:
Decide how to handle invalid input at runtime.
Accept a parameter type whose every possible value is valid, thereby making it impossible to pass invalid input.
For 1. there are many alternatives. Here are some:
Simply document the requirement as a precondition, so that violating it results in undefined behaviour. Assume the input is always valid and don't check for invalid values. This is the most efficient choice, but the least safe.
Throw an exception.
Return std::expected (standardised in C++23) or a similar class which contains either a result or an error state.
For 2., there are no trivial fool-proof choices. For a small set of valid values, an enum class is reasonably difficult to misuse:
enum class StrangeInput {
    zero = 0,
    one = 1,
    two = 2,
};
int GetNumberTimesTwo(StrangeInput num)
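A sketch of what the body and a call site might look like with that parameter type (the cast back to int is an assumption about the implementation):

int GetNumberTimesTwo(StrangeInput num)
{
    return static_cast<int>(num) * 2;
}

int main()
{
    int ok = GetNumberTimesTwo(StrangeInput::two); // fine
    // int bad = GetNumberTimesTwo(3); // error: no implicit int -> StrangeInput
}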
There is no way the compiler can validate this, and even if it could, what would the error be? Stop compiling?
The problem is that you are passing a variable to the function. In the universe of possible int values there are certainly values greater than 2, so how would the compiler know that your program never passes such a value?
This is typical runtime validation: you should be validating the preconditions in your function and handling these scenarios in case unexpected values come in.
With templates and/or constexpr you can get some compile-time checking, but it requires the value to be known at compile time, and it may restrict which functions you can call inside the function:
#include <stdexcept>

template <int N>
int GetNumberTimesTwo()
{
    static_assert(N <= 2);
    return N * 2;
}

constexpr int GetNumberTimesTwo(int num)
{
    // only code valid in a constexpr context is allowed here;
    // the restrictions have been relaxed with each standard revision
    if (num > 2) throw std::runtime_error("out of range");
    return num * 2;
}

constexpr int ok = GetNumberTimesTwo(1);
constexpr int ko = GetNumberTimesTwo(42); // compile-time error
int no_check1 = GetNumberTimesTwo(1);     // OK at runtime
int no_check2 = GetNumberTimesTwo(42);    // throws at runtime
This is the way to go:
int GetNumberTimesTwo(int num)
{
    if (num > 2)
    {
        return 0;
    }
    return num * 2;
}
Or throw an exception:
int GetNumberTimesTwo(int num)
{
    if (num > 2)
    {
        throw std::invalid_argument("num must not be greater than 2!");
    }
    return num * 2;
}
I am sure this question has been asked already but I couldn't find the answer.
If I have a function, let's say:
int Power(int number, int degree){
if(degree==0){
return 1;
}
return number*Power(number, degree-1);
}
It works only when the degree is a non-negative int. How can I prevent this function from being called with wrong parameters?
For example, if the programmer writes cout<<Power(2, -1);, I want the compiler to refuse to compile the code and return some kind of an error message (e.g. "function Power accepts only non-negative integers").
Another alternative would be for the function to not return any value in this case. For example:
int Power(int number, unsigned int degree){
    if(degree<0){
        //return nothing
    }
    if(degree==0){
        return 1;
    }
    return number*Power(number, degree-1);
}
There is an alternative to returning a value: Throw a value. A typical example:
if(degree<0){
    throw std::invalid_argument("degree may not be negative!");
}
I want the compiler to refuse to compile the code
In general, arguments are unknown until runtime, so this is not typically possible.
Your answer does the job for me nicely. But I am curious: 'throw' terminates the program and prevents anything after Power() from being executed.
If you catch the thrown object, then you can continue immediately after the function from which the object was thrown.
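For instance (a minimal sketch, assuming the throwing version of Power from above):

#include <iostream>
#include <stdexcept>

int main() {
    try {
        std::cout << Power(2, -1) << '\n'; // throws std::invalid_argument
    } catch (std::invalid_argument const& e) {
        std::cerr << e.what() << '\n';     // control resumes here
    }
    std::cout << "still running\n";        // execution continues normally
    return 0;
}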
The mere fact that C++ does implicit type conversions leaves you no way out of the predicament: if you write unsigned int x = -1;, no matter which warnings you turn on with your compiler, you won't see any problem with that.
The only rule coming to mind which might help you here is the notorious "max zero or one implicit conversions" rule. But I doubt it can be exploited in this case (-1 would need to be converted to unsigned int, then to another type, implicitly). And from what I read on the page I linked above, numeric implicit conversions do not really count under some circumstances.
This leaves you but one other, also imperfect, option. In the code below, I outline the basic idea. But there is endless room to refine it (more on that later). That option is to resort to optional types in combination with your own integer type. The code below also only hints at what is possible. All of that could be done in some fancy monadic framework or whatnot...
Obviously, in the code posted in the question, it is a bad idea to have the argument degree as an unsigned int, because then a negative value gets implicitly converted and the function cannot protect itself from a hostile degree of 0xFFFFFFFF (the max value of unsigned int). If it wanted to check, it had better have chosen int; then it could check for negative values.
The code in the question is also asking for a stack overflow, given that it does not implement power in a tail-recursive way. But this is just an aside and not the subject of the question at hand. Let's get that one quickly out of the way.
// This version at least has a chance to benefit from tail call optimization,
// and it handles degree == 0 (the original recursed past it indefinitely).
int internalPower_1 (int acc, int number, int degree) {
    if (0 == degree)
        return acc;
    return internalPower_1(acc * number, number, degree - 1);
}

int Power_1 (int number, int degree) {
    if (degree < 0)
        throw std::invalid_argument("degree < 0");
    return internalPower_1(1, number, degree);
}
Now, would it not be nice if we could have integer types which depended on the valid value range? Other languages have that (e.g. Common Lisp). Unless there is already something in Boost (I did not check), we have to roll such a thing ourselves.
Code first, excuses later:
#include <iostream>
#include <stdexcept>
#include <limits>
#include <optional>
#include <string>
template <int MINVAL = std::numeric_limits<int>::min(),
          int MAXVAL = std::numeric_limits<int>::max()>
struct Integer
{
    int value;

    static constexpr int MinValue() { return MINVAL; }
    static constexpr int MaxValue() { return MAXVAL; }

    using Class_t = Integer<MINVAL,MAXVAL>;
    using Maybe_t = std::optional<Class_t>;

    // Values passed in during run time get handled
    // and punished at run time.
    // No way to work around this, because we are
    // feeding our thing of beauty from the nasty
    // outside world.
    explicit Integer (int v)
        : value{v}
    {
        if (v < MINVAL || v > MAXVAL)
            throw std::invalid_argument("Value out of range.");
    }

    static Maybe_t Init (int v) {
        if (v < MINVAL || v > MAXVAL) {
            return std::nullopt;
        }
        return Maybe_t(v);
    }
};

using UInt = Integer<0>;
using Int  = Integer<>;

std::ostream& operator<< (std::ostream& os,
                          const typename Int::Maybe_t& v) {
    if (v) {
        os << v->value;
    } else {
        os << std::string("NIL");
    }
    return os;
}

template <class T>
auto operator* (const T& x, const T& y) -> T {
    if (x && y)
        return T::value_type::Init(x->value * y->value);
    return std::nullopt;
}

Int::Maybe_t internalPower_3 (const Int::Maybe_t& acc,
                              const Int::Maybe_t& number,
                              const UInt::Maybe_t& degree) {
    if (!acc) return std::nullopt;
    if (!degree) return std::nullopt;
    if (1 == degree->value) {
        return Int::Init(acc->value * number->value);
    }
    return internalPower_3(acc * number,
                           number,
                           UInt::Init(degree->value - 1));
}

Int::Maybe_t Power_3 (const Int::Maybe_t& number,
                      const UInt::Maybe_t& degree) {
    if (!number) return std::nullopt;
    return internalPower_3 (Int::Init(1),
                            number,
                            degree);
}

int main (int argc, const char* argv[]) {
    std::cout << Power_1 (2, 3) << std::endl;
    std::cout << Power_3 (Int::Init(2),
                          UInt::Init(3)) << std::endl;
    std::cout << Power_3 (Int::Init(2),
                          UInt::Init(-2)) << std::endl;
    std::cout << "UInt min value = "
              << UInt::MinValue() << std::endl
              << "UInt max value = "
              << UInt::MaxValue() << std::endl;
    return 0;
}
The key here is that the function Int::Init() returns Int::Maybe_t. Thus, before the error can propagate, the user gets a std::nullopt very early if they try to initialize with a value which is out of range. Using the constructor of Int instead would result in an exception.
In order for the code to be able to check, both signed and unsigned instances of the template (e.g. Integer<-10,10> or Integer<0,20>) use a signed int as storage, and can thus catch invalid values sneaking in via implicit type conversions. At the expense that our unsigned type on a 32-bit system has only 31 bits...
What this code does not show, but which could be nice, is the idea that the resulting type of an operation on two (different instances of) Integers could be yet another instance of Integer. Example: auto x = Integer<0,5>::Init(3) - Integer<0,5>::Init(5). In our current implementation, this would result in a nullopt, preserving the type Integer<0,5>. In a maybe better world, though, it would also be possible for the result to be an Integer<-2,5>.
Anyway, as it is, some might find my little Integer<,> experiment interesting. After all, using types to be more expressive is good, right? A function signature like typename Integer<-10,0>::Maybe_t foo(Integer<0,5>::Maybe_t x) is quite self-explanatory as to which range of values is valid for x.
I have a socket communication program. The protocol is that any error in writing is fatal, so the connection should be closed. My I/O code looks like this:
auto const toWrite = buf.size() * sizeof(buf[0]);
auto nWritten = ::write(fd, buf.data(), toWrite);
if (toWrite != nWritten)
{
    closeTheSocket();
}
This code gives warning: comparison between signed and unsigned integer expressions on the boolean test.
I understand the evils of greater/less-than comparisons between signed and unsigned, but it's unavoidable here. The signature of the ::write system call is
#include <unistd.h>
ssize_t write(int fd, const void *buf, size_t count);
In other words, my toWrite variable is properly unsigned and the returned nWritten is signed (-1 indicates an error). I don't care; anything other than a complete transfer is fatal to the connection. Also, I don't understand how an (in)equality test between signed/unsigned could be dangerous.
I've looked here, here, here, and here, but the questions are all about less-than comparisons, and the answers are all "don't do that".
This question asks about silencing the warning, but a sledgehammer "silence all signed/unsigned" comparisons is undesirable.
How should I silence just this warning in the least intrusive manner possible?
Separate the detection of the error condition from the detection of an incorrect length, and use an explicit cast:
if ( nWritten < 0 ||
     static_cast<decltype(toWrite)>(nWritten) != toWrite )
{
    // handle problems
}
Small edit: capture all negative values as errors for a wee bit of futureproofing.
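Put back into the question's context, the whole check reads (a sketch; fd, buf and closeTheSocket() as in the question):

auto const toWrite = buf.size() * sizeof(buf[0]);
auto const nWritten = ::write(fd, buf.data(), toWrite);
if (nWritten < 0 ||
    static_cast<decltype(toWrite)>(nWritten) != toWrite)
{
    closeTheSocket(); // any error or short write is fatal to the connection
}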
If you can bear some template boilerplate, another possible solution is to write a function which treats each type combination differently:
#include <type_traits>
template <class A, class B>
constexpr bool are_different(A a, B b)
{
    if constexpr (std::is_signed_v<A> and std::is_unsigned_v<B>)
    {
        if ( a < 0 )
            return true;
        else
            return std::make_unsigned_t<A>(a) != b;
    }
    else if constexpr (std::is_unsigned_v<A> and std::is_signed_v<B>)
    {
        if ( b < 0 )
            return true;
        else
            return a != std::make_unsigned_t<B>(b);
    }
    else
    {
        return a != b;
    }
}
int main()
{
    static_assert(are_different(1, 2));
    static_assert(!are_different(1ull, 1));
    static_assert(are_different(1u, 2));
    static_assert(are_different(1, 2u));
    static_assert(are_different(-1, -1u));
    static_assert(!are_different((long long)-1u, -1u));
}
Can anybody explain why isdigit returns 2048 for true? I am new to the ctype.h library.
#include <stdio.h>
#include <ctype.h>

int main() {
    char c = '9';
    printf ("%d", isdigit(c));
    return 0;
}
Because it's allowed to. The C99 standard says only this about isdigit, isalpha, etc.:
The functions in this subclause return nonzero (true) if and only if the value of the
argument c conforms to that in the description of the function.
As to why that's happening in practice, I'm not sure. At a guess, it's using a lookup table shared by all the is* functions and masking out all but a particular bit position, e.g.:
static const int table[256] = { /* ... */ };

// ... etc ...
int isalpha(int c) { return table[c] & 1024; }
int isdigit(int c) { return table[c] & 2048; }
// ... etc ...
Because no standard document defines which particular number has to represent true. For C, non-zero is true and zero is false, so the exact value returned depends on the implementation.
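If you need a guaranteed 0 or 1, normalize the result yourself; the cast to unsigned char below is the usual way to avoid undefined behaviour for negative char values:

#include <cctype>
#include <cstdio>

int main() {
    char c = '9';
    // != 0 collapses whatever nonzero value isdigit returns into exactly 1
    std::printf("%d\n", std::isdigit(static_cast<unsigned char>(c)) != 0);
}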
Is there a general way to check for overflow or underflow of a given data type (uint32, int, etc.)?
I am doing something like this:
uint32 a,b,c;
... //initialize a,b,c
if(b < c) {
    a -= (c - b);
}
When I print a after some iterations, it displays a large number like: 4294963846.
To check for overflow/underflow in arithmetic, compare the result against one of the original values:
uint32 a,b;
//assign values
uint32 result = a + b;
if (result < a) {
    //Overflow
}
For your specific case the check would be:
if (a < (c - b)) {
    //Underflow: a -= (c - b) would wrap around
}
I guess if I wanted to do that, I would make a class that simulates the data type and does the checks manually (which would be slow, I imagine):
class MyInt
{
    int val;
public: // the constructor and operators must be public to be usable
    MyInt(const int& nval) { val = nval; } // construct from int
    operator int() { return val; }         // convert back to int
    // then just overload ALL the operators, putting your check in each
};

//typedef int sint32;
typedef MyInt sint32;
It can be trickier than that; you might wind up having to use a #define instead of a typedef...
I did a similar thing with pointers to check where memory was being written outside of bounds. Very slow, but it did find where memory was being corrupted.
CERT has a good reference covering both signed integer overflow (which is undefined behaviour) and unsigned wrapping (which is not), and it covers all the operators.
The document provides the following checking code for unsigned wrapping in subtraction using preconditions:
void func(unsigned int ui_a, unsigned int ui_b) {
    unsigned int udiff;
    if (ui_a < ui_b){
        /* Handle error */
    } else {
        udiff = ui_a - ui_b;
    }
    /* ... */
}
and with post-conditions:
void func(unsigned int ui_a, unsigned int ui_b) {
    unsigned int udiff = ui_a - ui_b;
    if (udiff > ui_a) {
        /* Handle error */
    }
    /* ... */
}
If you are on GCC 5 or later, you can use __builtin_sub_overflow:
__builtin_sub_overflow( ui_a, ui_b, &udiff )
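Spelled out (a sketch; the builtin returns true when the subtraction wrapped, and it is also available in Clang):

void func(unsigned int ui_a, unsigned int ui_b) {
    unsigned int udiff;
    if (__builtin_sub_overflow(ui_a, ui_b, &udiff)) {
        /* Handle error */
    }
    /* ... use udiff ... */
}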
Boost has a neat library called Safe Numerics. Depending on how you instantiate the safe template, the library will throw an exception when overflow or underflow occurs. See https://www.boost.org/doc/libs/1_74_0/libs/safe_numerics/doc/html/index.html.
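A hedged sketch of how that looks (assuming Boost 1.69 or later; check the linked documentation for the exact header and exception guarantees):

#include <boost/safe_numerics/safe_integer.hpp>
#include <cstdint>
#include <iostream>

int main() {
    using boost::safe_numerics::safe;
    try {
        safe<std::uint32_t> a = 1;
        safe<std::uint32_t> b = 2;
        safe<std::uint32_t> c = a - b; // would wrap around; throws instead
        std::cout << c << '\n';
    } catch (std::exception const& e) {
        std::cout << e.what() << '\n';
    }
}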
I'll put here another possible approach, for when a bigger (double-width) integer type is available. In that case it is possible to prevent the overflow from happening altogether, at the expense of a little more computation.
// https://gcc.godbolt.org/z/fh9G6Eeah
#include <cstdint>
#include <iostream>
#include <limits>
#include <stdexcept> // std::overflow_error lives here, not in <exception>

using integer_t = std::uint32_t; // The desired type
using bigger_t  = std::uint64_t; // Bigger type
constexpr integer_t add(const integer_t a, const integer_t b)
{
    static_assert(sizeof(bigger_t) >= 2*sizeof(integer_t));
    constexpr bigger_t SUP = std::numeric_limits<integer_t>::max();
    constexpr bigger_t INF = std::numeric_limits<integer_t>::min();

    // Using larger type for operation
    bigger_t res = static_cast<bigger_t>(a) + static_cast<bigger_t>(b);

    // Check overflows
    if(res>SUP) throw std::overflow_error("res too big");
    else if(res<INF) throw std::overflow_error("res too small");

    // Back to the original type
    return static_cast<integer_t>(res); // No danger of narrowing here
}
//---------------------------------------------------------------------------
int main()
{
    std::cout << add(100,1) << '\n';
    try {
        std::cout << add(std::numeric_limits<integer_t>::max(), 1) << '\n';
    } catch(const std::overflow_error& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}