Do datatypes exist in C++ with range
1 <= N <= 10^18
0 <= K <= 10^18
If not, is there any way to restrict a variable's input range?
Since 10^18 < 2^64, unsigned long long will be big enough to hold values in your requested ranges.
Regarding "restricting a variable's input range", it's not clear what kind of restrictions you have in mind:
Do you want functionality such that values declared outside the range fail to compile?
Do you want functionality such that if a value is computed that falls outside that range, some special action (such as crashing the program, or throwing an exception, or printing an error message) occurs?
Or are you looking for a datatype that clamps "out of range" values to the nearest value in the range?
Or are you looking for a datatype that acts like C++'s built-in unsigned datatypes, where overflow (and underflow) cause a modulo-style "wrap-around" in the represented value?
Some of those handling strategies could be implemented via a custom class (at the cost of a certain amount of efficiency). If you don't need any particular error-checking for out-of-range values, OTOH, then plain old unsigned long long will work fine and be most efficient as it maps directly to the underlying CPU hardware.
If you need to limit the range, you can wrap the variable in a class. You can overload the operators so that you can do arithmetic with the value as you normally would.
Usually I would resort to templates to implement such functionality, but I think the example below is easier to understand.
#include <iostream>

class MyInt
{
public:
    MyInt(int minval, int maxval);
    MyInt& operator=(MyInt const& rhs);
    MyInt& operator=(int rhs);
private:
    int _val;
    int _minval;
    int _maxval;
    bool _is_inrange(int val);
};
Whenever you perform an operation on the class, it needs to check whether the result is in the correct range. Of course, the data type you base your class on needs to be able to accommodate the desired value range. I used int as an example. If you use templates, you could select the base data type when instantiating the class.
MyInt::MyInt(int minval, int maxval)
{
    _minval = minval;
    _maxval = maxval;
    _val = minval; // start with a known in-range value
}
bool MyInt::_is_inrange(int val)
{
    return _minval <= val && val < _maxval;
}
You can overload the operators to work with the values in the same way as with primitive datatypes. I have implemented the assignment operator as an example, but you can also overload the arithmetic operators.
MyInt& MyInt::operator=(int rhs)
{
    if (_is_inrange(rhs))
    {
        _val = rhs;
    }
    else
    {
        // throw an error or do something else.
        std::cout << "Error: Invalid value" << std::endl;
    }
    return *this;
}
MyInt& MyInt::operator=(MyInt const& rhs)
{
    if (_is_inrange(rhs._val))
    {
        _val = rhs._val;
    }
    else
    {
        // throw an error or do something else.
        std::cout << "Error: Invalid value" << std::endl;
    }
    return *this;
}
And finally here is how you would use that class in a program.
#include <iostream>

int main()
{
    MyInt custom_int(0, 10); // accepts values in [0, 10)
    std::cout << "Assigning valid value..." << std::endl;
    custom_int = 9;
    std::cout << "Assigning invalid value..." << std::endl;
    custom_int = 10; // out of range: prints the error message
    return 0;
}
I am getting some -Wnarrowing conversion errors when doubles are narrowed to floats. How can I do this in a well-defined way? Preferably with an option in a template I can toggle to switch behavior from throwing exceptions, to clamping to the nearest value, to simple truncation. I was looking at the gsl::narrow cast, but it seems that it just performs a static cast under the hood with a comparison as a follow-up: Understanding gsl::narrow implementation. I would like something that is more robust, as according to What are all the common undefined behaviours that a C++ programmer should know about?, static_cast<> is UB if the value is unrepresentable in the target type. I also really liked this implementation, but it also relies on a static_cast<>: Can a static_cast<float> from double, assigned to double be optimized away? I do not want to use boost for this. Are there any other options? It's best if this works in C++03, but C++0x (experimental C++11) is also acceptable... or 11 if really needed...
Because someone asked, here's a simple toy example:
#include <iostream>
float doubleToFloat(double num) {
    return static_cast<float>(num);
}

int main(int, char**) {
    double source = 1; // assume 1 could be any valid double value
    try {
        float dest = doubleToFloat(source);
        std::cout << "Source: (" << source << ") Dest: (" << dest << ")" << std::endl;
    }
    catch (std::exception& e) {
        std::cout << "Got exception error: " << e.what() << std::endl;
    }
}
My primary interest is in adding error handling and safety to doubleToFloat(...), with various custom exceptions if needed.
As long as your floating-point types can store infinities (which is extremely likely), there is no possible undefined behavior. You can test std::numeric_limits<float>::has_infinity if you really want to be sure.
Use static_cast to silence the warning, and if you want to check for an overflow, you can do something like this:
#include <limits>

template <typename T>
bool isInfinity(T f) {
    return f == std::numeric_limits<T>::infinity()
        || f == -std::numeric_limits<T>::infinity();
}

float doubleToFloat(double num) {
    float result = static_cast<float>(num);
    if (isInfinity(result) && !isInfinity(num)) {
        // overflow happened
    }
    return result;
}
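Building on that overflow check, a throwing variant along the lines the question asks for might look like this (a sketch; the function name and the choice of std::overflow_error are mine, and it relies on the float type having infinities, which std::numeric_limits<float>::has_infinity can confirm):

```cpp
#include <cmath>
#include <stdexcept>

// Sketch: convert double -> float, throwing if the finite input value
// overflows the float range and the result becomes infinite.
float doubleToFloatChecked(double num) {
    float result = static_cast<float>(num);
    if (std::isinf(result) && !std::isinf(num))
        throw std::overflow_error("double value overflows float range");
    return result;
}
```

Usage: doubleToFloatChecked(1.0) returns 1.0f unchanged, while doubleToFloatChecked(1e300) throws, since 1e300 is far beyond the largest finite float (about 3.4e38).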
Any double value that doesn't overflow will be converted either exactly or to one of the two nearest float values (probably the nearest). You can explicitly set the rounding direction with std::fesetround.
It depends on what you mean by "safely". There will most likely be a drop of precision in most cases. Do you want to detect if this happens? Assert, or just know about it and notify the user?
A possible path of solution would be to static cast the double to a float, then back to double, and compare the before and after. Exact equality is unlikely, but you could assert that the loss of precision is within your tolerance.
#include <cmath>

float doubleToFloat(double a_in, bool& ar_withinSpec, double a_tolerance)
{
    auto reducedPrecision = static_cast<float>(a_in);
    auto roundTrip = static_cast<double>(reducedPrecision);
    // Compare the round-trip difference against the tolerance,
    // not the round-trip value itself.
    ar_withinSpec = (std::fabs(a_in - roundTrip) <= a_tolerance);
    return reducedPrecision;
}
I am sure this question has been asked already but I couldn't find the answer.
If I have a function, let's say:
int Power(int number, int degree){
    if(degree==0){
        return 1;
    }
    return number*Power(number, degree-1);
}
It works only when the degree is a non-negative int. How can I prevent this function from being called with wrong parameters?
For example, if the programmer writes cout<<Power(2, -1);, I want the compiler to refuse to compile the code and return some kind of an error message (e.g. "function Power accepts only non-negative integers").
Another alternative would be for the function to not return any value in this case. For example:
int Power(int number, unsigned int degree){
    if(degree<0){
        //return nothing
    }
    if(degree==0){
        return 1;
    }
    return number*Power(number, degree-1);
}
There is an alternative to returning a value: Throw a value. A typical example:
if (degree < 0) {
    throw std::invalid_argument("degree may not be negative!");
}
I want the compiler to refuse to compile the code
In general, arguments are unknown until runtime, so this is not typically possible.
Your answer does the job for me nicely. But I am curious: throw terminates the program and prevents anything after Power() from being executed.
If you catch the thrown object, then you can continue immediately after the function from which the object was thrown.
The mere fact that C++ does implicit type conversions leaves you no way out of the predicament: if you write unsigned int x = -1;, no matter which warnings you turn on with your compiler, you won't see any problem with that.
The only rule that comes to mind which might help you here is the notorious "max zero or one implicit conversions" rule. But I doubt it can be exploited in this case (-1 would need to be converted to unsigned int, then to another type, implicitly). And from what I read on the page I linked above, numeric implicit conversions do not really count under some circumstances.
This leaves you one other, also imperfect, option. In the code below, I outline the basic idea, but there is endless room to refine it (more on that later). The option is to resort to optional types in combination with your own integer type. The code below only hints at what is possible; all of this could be done in some fancy monadic framework or whatnot...
Obviously, in the code posted in the question, it is a bad idea to have the argument degree as an unsigned int, because a negative value gets implicitly converted and the function cannot protect itself from the hostile degree 0xFFFFFFFF (the max value of unsigned int). If it wanted to check, it had better have chosen int; then it could check for negative values.
The code in the question also invites a stack overflow, given that it does not implement power in a tail-recursive way. But this is just an aside and not the subject of the question at hand. Let's get it quickly out of the way:
// This version at least has a chance to benefit from tail call optimizations.
// The base case is degree == 0, so Power_1(n, 0) correctly yields 1.
int internalPower_1 (int acc, int number, int degree) {
    if (0 == degree)
        return acc;
    return internalPower_1(acc * number, number, degree - 1);
}

int Power_1 (int number, int degree) {
    if (degree < 0)
        throw std::invalid_argument("degree < 0");
    return internalPower_1(1, number, degree);
}
Now, would it not be nice if we could have integer types which depend on the valid value range? Other languages have this (e.g. Common Lisp). Unless there is already something in boost (I did not check), we have to roll such a thing ourselves.
Code first, excuses later:
#include <iostream>
#include <stdexcept>
#include <limits>
#include <optional>
#include <string>

template <int MINVAL = std::numeric_limits<int>::min(),
          int MAXVAL = std::numeric_limits<int>::max()>
struct Integer
{
    int value;

    static constexpr int MinValue() {
        return MINVAL;
    }
    static constexpr int MaxValue() {
        return MAXVAL;
    }

    using Class_t = Integer<MINVAL, MAXVAL>;
    using Maybe_t = std::optional<Class_t>;

    // Values passed in during run time get handled
    // and punished at run time.
    // No way to work around this, because we are
    // feeding our thing of beauty from the nasty
    // outside world.
    explicit Integer (int v)
        : value{v}
    {
        if (v < MINVAL || v > MAXVAL)
            throw std::invalid_argument("Value out of range.");
    }

    static Maybe_t Init (int v) {
        if (v < MINVAL || v > MAXVAL) {
            return std::nullopt;
        }
        return Maybe_t(v);
    }
};

using UInt = Integer<0>;
using Int = Integer<>;

std::ostream& operator<< (std::ostream& os,
                          const typename Int::Maybe_t& v) {
    if (v) {
        os << v->value;
    } else {
        os << std::string("NIL");
    }
    return os;
}

template <class T>
auto operator* (const T& x, const T& y) -> T {
    if (x && y)
        return T::value_type::Init(x->value * y->value);
    return std::nullopt;
}

Int::Maybe_t internalPower_3 (const Int::Maybe_t& acc,
                              const Int::Maybe_t& number,
                              const UInt::Maybe_t& degree) {
    if (!acc) return std::nullopt;
    if (!degree) return std::nullopt;
    if (1 == degree->value) {
        return Int::Init(acc->value * number->value);
    }
    return internalPower_3(acc * number,
                           number,
                           UInt::Init(degree->value - 1));
}

Int::Maybe_t Power_3 (const Int::Maybe_t& number,
                      const UInt::Maybe_t& degree) {
    if (!number) return std::nullopt;
    return internalPower_3(Int::Init(1), number, degree);
}

int main (int argc, const char* argv[]) {
    std::cout << Power_1(2, 3) << std::endl;
    std::cout << Power_3(Int::Init(2),
                         UInt::Init(3)) << std::endl;
    std::cout << Power_3(Int::Init(2),
                         UInt::Init(-2)) << std::endl;
    std::cout << "UInt min value = "
              << UInt::MinValue() << std::endl
              << "UInt max value = "
              << UInt::MaxValue() << std::endl;
    return 0;
}
The key here is that the function Int::Init() returns Int::Maybe_t. Thus, before the error can propagate, the user gets a std::nullopt very early if they try to init with a value which is out of range. Using the constructor of Int instead would result in an exception.
In order for the code to be able to check, both signed and unsigned instances of the template (e.g. Integer<-10,10> or Integer<0,20>) use a signed int as storage, thus being able to check for invalid values sneaking in via implicit type conversions. The expense is that our unsigned type on a 32-bit system would be only 31 bits...
What this code does not show, but could be nice, is the idea that the resulting type of an operation with two (different instances of) Integers could be yet another instance of Integer. Example: auto x = Integer<0,5>::Init(3) - Integer<0,5>::Init(5). In our current implementation, this would result in a nullopt, preserving the type Integer<0,5>. In a maybe better world, though, the result could be an Integer<-2,5>.
Anyway, as it is, some might find my little Integer<,> experiment interesting. After all, using types to be more expressive is good, right? A function signature like typename Integer<-10,0>::Maybe_t foo(Integer<0,5>::Maybe_t x) is quite self-explanatory as to which range of values is valid for x.
Disclaimer - this is a school assignment, however the problem is still interesting I hope!
I have implemented a custom class called Vector<bool>, which stores the bool entries as bits in an array of numbers.
Everything has gone fine except for implementing this:
bool& operator[](std::size_t index) {
    validate_bounds(index);
    ???
}
The const implementation is quite straightforward, just reading out the value. Here, however, I can't really understand what to do, and the course is a specialization course on C++, so I'm guessing I should do some type-deffing or something. The data is represented by an array of type unsigned int and should be dynamic (e.g. push_back(bool value) should be implemented).
I solved this implementing a proxy class:
class BoolVectorProxy {
public:
    explicit BoolVectorProxy(unsigned int& reference, unsigned char index) {
        this->reference = &reference;
        this->index = index;
    }

    void operator=(const bool v) {
        if (v) *reference |= 1 << index;
        else *reference &= ~(1 << index);
    }

    operator bool() const {
        return (*reference >> index) & 1;
    }

private:
    unsigned int* reference;
    unsigned char index;
};
And inside the main class:
BoolVectorProxy operator[](std::size_t index) {
    validate_bound(index);
    return BoolVectorProxy(array[index / BLOCK_CAPACITY], index % BLOCK_CAPACITY);
}
I also use Catch as a testing library, the code passes this test:
TEST_CASE("access and assignment with brackets", "[Vector]") {
    Vector<bool> a(10);
    a[0] = true;
    a[0] = false;
    REQUIRE(!a[0]);
    a[1] = true;
    REQUIRE(a[1]);

    const Vector<bool>& b = a;
    REQUIRE(!b[0]);
    REQUIRE(b[1]);

    a[0] = true;
    REQUIRE(a[0]);
    REQUIRE(b[0]);
    REQUIRE(b.size() == 10);

    REQUIRE_THROWS(a[-1]);
    REQUIRE_THROWS(a[10]);
    REQUIRE_THROWS(b[-1]);
    REQUIRE_THROWS(b[10]);
}
If anyone finds any issues or improvements that can be made, please comment, thanks!
Basically, implementing operator[] is the same as implementing the const operator[], as you might expect; it's just that one returns something writable (usable as an lvalue) and the other is read-only.
I think you've got an understanding of the problem: you can convert an unsigned int into a bool using bitwise operations, and you can also say "if the nth bool is modified in X, do a bitwise operation with X and it's done!". But this operator means: I want an lvalue of the bool so I can modify it whenever I want and have an impact on the associated integer. It means that you want a reference to a bool, or in your case a reference to a single bit, so you can modify that bit on the fly. Unfortunately you can't reference a single bit; the smallest thing you can reference is a whole byte (with char), so you would have to take a chunk of at least 7 other booleans with you. That's not what you want.
That being said, I understand that it might be required for your assignment, but packing bools into multiple unsigned ints looks like useless C-style optimization to me. You would be better off having a single array of bools (C-style) and doing the memory handling manually, because that is almost what you are doing. Plus, with that method, you would actually be able to reference one single boolean (and modify it) without touching the others. Is it mandatory to use an array of unsigned int for this assignment?
I would expect the following code to produce equality, but bool values are shown to be different.
#include <iostream>

union crazyBool
{
    unsigned char uc;
    bool b;
};

int main()
{
    crazyBool a, b;
    a.uc = 1;
    b.uc = 5;
    if (a.b == b.b)
    {
        std::cout << "==" << std::endl;
    }
    else
    {
        std::cout << "!=" << std::endl;
    }

    bool x, y;
    void *xVP = &x, *yVP = &y;
    unsigned char *xP = static_cast<unsigned char*>(xVP);
    unsigned char *yP = static_cast<unsigned char*>(yVP);
    (*xP) = (unsigned char)1;
    (*yP) = (unsigned char)5;
    if (x == y)
    {
        std::cout << "==" << std::endl;
    }
    else
    {
        std::cout << "!=" << std::endl;
    }
    return 0;
}
Note that here we are not only changing the value through the union (which was pointed out as being undefined) but also accessing memory directly via a void pointer.
If you assign to a member of a union, that member becomes the active member.
Reading any member other than the active one is undefined behavior, meaning that anything could happen. There are a few specific exceptions to this rule, depending on the version of the C++ standard your compiler is following; but none of these exceptions apply in your case.
(One of the exceptions is that if the union has multiple members of class type where the classes have the same initial members, these shared members can be read through any of the union's members.)
Edit to address the question clarification:
The standard defines bool as having the values true and false, no others. (C++11, 3.9.1/6) It never defines what it means to copy some other bit pattern over the storage (which is what you are doing). Since it doesn't define it, the behavior is undefined too.
Yes, when converting an integer to a bool 0 becomes false and anything else becomes true, but that is just a conversion rule, not a statement about bool's representation.
I know this is a very noob-ish question, but how do I define an integer's interval?
If I want an integer X to be 56 <= X <= 1234, how do I declare X?
The best way would be to create your own integer class with bounds on it and overloaded operators like +, * and ==, basically all the operations a normal integer has. You will have to decide the behavior when the number gets too high or too low. I'll give you a start on the class:
struct mynum {
    int value;
    static const int upper = 100000;
    static const int lower = -100000;

    operator int() const {
        return value;
    }

    explicit mynum(int v) {
        value = v;
        if (value > upper) value = upper;
        if (value < lower) value = lower;
    }
};
mynum operator+(const mynum& first, const mynum& second) {
    return mynum(first.value + second.value);
}
There is already a question on Stack Overflow like yours. It has a more complete version of what I was doing; it may be a little hard to digest for a beginner, but it seems to be exactly what you want.