I am debating whether I can get rid of a compiler warning or not. The warning comes from comparing a uint32 to -1.
Now, just from glancing at it, this seems like an unwise thing to do, since a uint32 should never be negative. But I didn't write this code and am not as familiar with the C++ way of doing things, so I am asking you. Here is some example code to illustrate what's happening:
bool isMyAddressValid = false;
unsigned int myAddress(-1);
unsigned int translatedAddress;
if(isMyAddressValid)
{
    translatedAddress = 500;
}
else
{
    translatedAddress = -1;
}
myAddress = translatedAddress;
if(myAddress == -1)
{
    std::cout << "ERROR OCCURRED";
}
else
{
    std::cout << "SUCCESS";
}
So is this valid code? Is this some C-ism that I do not properly understand?
Setting an unsigned type to -1 is the idiomatic way of setting it to its maximum possible value, regardless of the number of bits in the type.
A clumsier but perhaps clearer way would be to write
translatedAddress = std::numeric_limits<decltype(translatedAddress)>::max();
If it is in your arsenal of libraries, I would use std::optional or boost::optional.
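For example, a minimal sketch using std::optional (C++17); translateAddress is a hypothetical stand-in for the lookup above:
#include <iostream>
#include <optional>

// Hypothetical lookup: an empty optional replaces the -1 sentinel.
std::optional<unsigned int> translateAddress(bool isValid) {
    if (isValid)
        return 500;
    return std::nullopt;
}

int main() {
    if (auto address = translateAddress(false))
        std::cout << "SUCCESS: " << *address;
    else
        std::cout << "ERROR OCCURRED";
}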
The code is valid according to the standard; both the assignment and the equality check apply integer conversions to their operands. It is, however, a C-ism to use a sentinel value; I would use an exception instead.
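A minimal sketch of the exception approach (translateAddress is a hypothetical name):
#include <stdexcept>

// Hypothetical: failure is signalled by an exception rather than a sentinel.
unsigned int translateAddress(bool isValid) {
    if (!isValid)
        throw std::runtime_error("address translation failed");
    return 500;
}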
I am getting some -Wnarrowing conversion errors when doubles are narrowed to floats. How can I do this in a well-defined way? Preferably with an option in a template I can toggle to switch behavior between throwing exceptions, clamping to the nearest value, or simple truncation. I was looking at gsl::narrow, but it seems that it just performs a static_cast under the hood with a follow-up comparison: Understanding gsl::narrow implementation. I would like something more robust, since according to What are all the common undefined behaviours that a C++ programmer should know about?, static_cast<> is UB if the value is unrepresentable in the target type. I also really liked this implementation, but it also relies on a static_cast<>: Can a static_cast<float> from double, assigned to double be optimized away? I do not want to use Boost for this. Are there any other options? It's best if this works in C++03, but C++0x (experimental C++11) is also acceptable, or C++11 if really needed.
Because someone asked, here's a simple toy example:
#include <iostream>

float doubleToFloat(double num) {
    return static_cast<float>(num);
}

int main(int, char**) {
    double source = 1; // assume 1 could be any valid double value
    try {
        float dest = doubleToFloat(source);
        std::cout << "Source: (" << source << ") Dest: (" << dest << ")" << std::endl;
    }
    catch (std::exception& e) {
        std::cout << "Got exception error: " << e.what() << std::endl;
    }
}
My primary interest is in adding error handling and safety to doubleToFloat(...), with various custom exceptions if needed.
As long as your floating-point types can store infinities (which is extremely likely), there is no possible undefined behavior. You can test std::numeric_limits<float>::has_infinity if you really want to be sure.
Use static_cast to silence the warning, and if you want to check for an overflow, you can do something like this:
#include <limits>

template <typename T>
bool isInfinity(T f) {
    return f == std::numeric_limits<T>::infinity()
        || f == -std::numeric_limits<T>::infinity();
}

float doubleToFloat(double num) {
    float result = static_cast<float>(num);
    if (isInfinity(result) && !isInfinity(num)) {
        // overflow happened: num was finite but its magnitude is too large for float
    }
    return result;
}
Any double value that doesn't overflow will be converted either exactly or to one of the two nearest float values (probably the nearest). You can explicitly set the rounding direction with std::fesetround.
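A minimal sketch of setting the rounding direction; this assumes the implementation honors the floating-point environment during the conversion (strictly, #pragma STDC FENV_ACCESS ON is also required, and the volatile below discourages compile-time constant folding):
#include <cfenv>
#include <iostream>

int main() {
    const int oldMode = std::fegetround();
    std::fesetround(FE_TOWARDZERO);   // truncate instead of rounding to nearest
    volatile double d = 0.1;          // 0.1 is not exactly representable
    float f = static_cast<float>(d);
    std::fesetround(oldMode);         // restore the previous rounding mode
    std::cout << f << '\n';
}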
It depends on what you mean by "safely". There will most likely be a drop of precision in most cases. Do you want to detect if this happens? Assert, or just know about it and notify the user?
A possible approach would be to static_cast the double to a float, then back to a double, and compare the before and after. Exact equality is unlikely, but you could assert that the loss of precision is within your tolerance.
#include <cmath>

float doubleToFloat(double a_in, bool& ar_withinSpec, double a_tolerance)
{
    auto reducedPrecision = static_cast<float>(a_in);
    auto roundTrip = static_cast<double>(reducedPrecision);
    // Compare the round-trip difference, not the raw value, against the tolerance.
    ar_withinSpec = (std::abs(a_in - roundTrip) < a_tolerance);
    return reducedPrecision;
}
I had this code as part of a C++ project:
unsigned int fn() {
    // do some computations
    int value = ....
    if (value <= 0)
        return 0;
    return (unsigned int)value;
}
I don't get the need for the cast in the last return statement, since all negative numbers will be caught in the if statement (hence returning 0).
Moreover, fn is called from another function (whose return type is int) that simply returns the value returned by fn.
Thanks.
The purpose is to silence the compiler warning that could otherwise be issued.
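In C++, a static_cast says the same thing more explicitly. A sketch of the function above with the C-style cast replaced (compute() is a hypothetical stand-in for the elided computations):
// compute() is a hypothetical stand-in for "do some computations".
int compute();

unsigned int fn() {
    int value = compute();
    if (value <= 0)
        return 0;
    return static_cast<unsigned int>(value); // explicit, and silences the warning
}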
I think that it really changes nothing.
I personally ran the same code under different scenarios, and it appears the last cast can be done away with.
Simply stated, why does the following code compile?
#include <iostream>

int foo() {
    return 0, 1;
}

int main() {
    int a, b = foo();
    std::cout << a << " " << b << std::endl;
    std::cout << foo();
    return 0;
}
I am using a 64-bit Windows machine, and compiling with Dev-C++ (using the MinGW GCC 4.5.2 32-bit compiler).
This code prints the following output:
2686824 1
1
I strongly suspect that the value contained in a is the usual garbage stored in uninitialized variables.
It is clear from this question that returning multiple values from functions is not common practice in industry, and it is definitely discouraged and penalized in programming courses taught at academic institutions.
So why does it work? As I've matured as a programmer, I've realized the incredible value of compiler errors, which are infinitely more intelligible than linker or run-time errors, and obviously way better than bugs.
To be clear, I'm more interested in why this has been allowed from a language design perspective, rather than the technical specifics of how the compiler does its thing (unless, in this case, implementation realities or technical consequences have made it difficult/impossible to detect/manage multiple return variables).
Are there some esoteric cases where multi-variable return statements were deemed useful?
You still only return one value, which is 1.
return 0,1;
makes use of the comma operator; its result is its right-hand operand. With the right warning level, your compiler (gcc and clang at least) will warn because the left-hand operand has no effect.
If you actually want to return multiple values, you can return a std::tuple (or std::pair):
#include <tuple>

auto fun() {
    return std::make_tuple(0, 1);
}

int main() {
    int a, b;
    std::tie(a, b) = fun();
    // Now a == 0, b == 1
}
Equivalent alternative as of C++17:
#include <tuple>

auto fun() {
    return std::make_tuple(0, 1);
}

int main() {
    auto [a, b] = fun();
    // Now a == 0, b == 1
}
0,1 is an expression using the comma operator, which does nothing except evaluate both operands in order and yield the value of the second one (1). So return 0,1 is equivalent to saying return 1.
int a,b = foo(); creates two variables. a is uninitialized, and b is assigned the return value of foo().
That's why the value of b is 1 and the value of a is wacky (reading an uninitialized variable is undefined).
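A short sketch of the distinction, assuming foo() from the question:
int a, b = foo();         // a is declared but uninitialized; only b gets foo()'s result
int c = foo(), d = foo(); // both c and d are initialized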
IME, the main benefit of the comma operator is for loops in C. For example, suppose you want to enumerate the elements of a linked list. Lists don't have a built-in concept of an index, so you end up keeping two trackers of position in the list.
for (index = 0, curr_node = list_begin;
     curr_node != NULL;
     ++index, curr_node = curr_node->next)
{
    // Stuff
}
It's a convenient little trick for readability and it also guarantees that index/curr_node are in sync no matter what you do with break or continue.
I am trying to implement the LSB lookup method suggested by Andrew Grant in an answer to this question: Position of least significant bit that is set
However, it's resulting in a segmentation fault. Here is a small program demonstrating the problem:
#include <iostream>

typedef unsigned char Byte;

int main()
{
    int value = 300;
    Byte* byteArray = (Byte*)value;
    if (byteArray[0] > 0)
    {
        std::cout << "This line is never reached. Trying to access the array index results in a seg-fault." << std::endl;
    }
    return 0;
}
What am I doing wrong?
I've read that it's not good practice to use 'C-Style' casts in C++. Should I use reinterpret_cast<Byte*>(value) instead? This still results in a segmentation fault, though.
Use this:
(Byte*) &value;
You don't want a pointer to address 300, you want a pointer to where 300 is stored. So, you use the address-of operator & to get the address of value.
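Applied to the program from the question, the fix looks like this (on a little-endian machine the first byte of 300, i.e. 0x012C, is 0x2C):
#include <iostream>

typedef unsigned char Byte;

int main()
{
    int value = 300;
    Byte* byteArray = (Byte*)&value; // address of value, not the value itself
    if (byteArray[0] > 0)
    {
        std::cout << "First byte: " << (int)byteArray[0] << std::endl;
    }
    return 0;
}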
While Erik answered your overall question, as a followup I would say emphatically -- yes, reinterpret_cast should be used rather than a C-style cast.
Byte* byteArray = reinterpret_cast<Byte*>(&value);
The line should be:
Byte* byteArray = (Byte*)&value;
You should not have to put the (void *) in front of it.
char *array=(char*)(void*)&value;
Basically you take a pointer to the variable and recast it to a pointer to a byte.
@Erik already fixed your primary problem, but there is a subtle one that you still have. If you are only looking for the least significant bit, there is no need to bother with the cast at all.
#include <iostream>

int main()
{
    int value = 300;
    if (value & 0x00000001)
    {
        std::cout << "LSB is set" << std::endl;
    }
    return 0;
}
I have code in which the user has to pass a number > 0, otherwise it will throw. Typing the argument as std::size_t doesn't work, because negative numbers are converted to large positive numbers. Is it good practice to use a signed type instead, or are there other ways to enforce this?
void f(std::size_t number)
{
    // if -1 is passed, I (for obvious reasons) get a large positive number
}
I don't think there's a definitive correct answer to this question. You can take a look at Scott Meyers's opinion on the subject:
One problem is that unsigned types tend to decrease your ability to detect common programming errors. Another is that they often increase the likelihood that clients of your classes will use the classes incorrectly.
In the end, the question to ask is really: do you need the extra possible values provided by the unsigned type?
A lot depends on what type of argument you imagine your clients trying to pass. If they're passing int, and that's clearly big enough to hold the range of values you're going to use, then there's no practical advantage to using std::size_t - it won't enforce anything, and the way the issue manifests as an apparently huge number is simply more confusing.
BUT - it is good to use size_t anyway as it helps document the expectations of the API.
You clearly can't do a compile-time check for "> 0" against a run-time generated value, but you can at least disambiguate negative inputs from intentional huge numbers, like this:
template <typename T>
void f(T t)
{
    if (!(t > 0))
        throw std::runtime_error("be positive!");
    // do stuff with t, knowing it's not -1 shoehorned into size_t...
    ...
}
But, if you are really concerned about this, you could provide overloads:
// call me please, e.g. f(size_t(10));
void f(size_t);

// unimplemented (and private if possible)...
// "want to make sure you realise this is unsigned: call f(size_t) explicitly"
void f(int32_t);
void f(int64_t);
...then there's a compile-time error leading to the comments, which ask the caller to explicitly provide a size_t argument (casting if necessary). Forcing the client to provide an arg of size_t type is a pretty good way to make sure they're conscious of the issue.
Rin's got a good idea too - it would work really well where it works at all (it depends on there being a signed int type larger than size_t). Go check it out....
EDIT - demonstration of template idea above...
#include <iostream>

template <typename T>
void f(T t)
{
    if (!(t > 0))
        std::cout << "bad call f(" << (int)t << ")\n";
    else
        std::cout << "good f(" << (int)t << ")\n";
}

int main()
{
    f((char)-1);
    f((unsigned char)255);
}
I had the same problems you're having: Malfunctioning type-casting of string to unsigned int
Since, in my case, I'm getting the input from the user, my approach was to read the data as a string and check its contents.
// Assumes EmptyInput<> and InvalidInput<> are exception templates defined elsewhere.
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

template <class T>
T getNumberInput(std::string prompt, T min, T max) {
    std::string input;
    T value;
    while (true) {
        try {
            std::cout << prompt;
            std::cin.clear();
            std::getline(std::cin, input);
            std::stringstream sstream(input);
            if (input.empty()) {
                throw EmptyInput<std::string>(input);
            } else if (input[0] == '-' && std::numeric_limits<T>::min() == 0) {
                throw InvalidInput<std::string>(input);
            } else if ((sstream >> value) && (value >= min) && (value <= max)) {
                std::cout << std::endl;
                return value;
            } else {
                throw InvalidInput<std::string>(input);
            }
        } catch (EmptyInput<std::string>& emptyInput) {
            std::cout << "The field cannot be empty!\n" << std::endl;
        } catch (InvalidInput<std::string>& invalidInput) {
            std::cout << "Invalid data type!\n" << std::endl;
        }
    }
}
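Hypothetical usage, assuming the exception templates are in scope:
// Inside main(): re-prompts until the user enters an int in [1, 100].
int n = getNumberInput<int>("Enter a number between 1 and 100: ", 1, 100);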
If the allowed value range for number permits it, use the signed std::ptrdiff_t (like Alexey said).
Or use a library like SafeInt and declare f something like this: void f( SafeInt< std::size_t > i );, which throws if you call it with something like f( -1 );.
First solution
#include <stdexcept>

void f(std::ptrdiff_t number) {
    // a bare "throw;" with no active exception would call std::terminate
    if (number < 0) throw std::invalid_argument("number must be positive");
}
Second solution
#include <limits>
#include <stdexcept>

void f(std::size_t number) {
    // treat values in the upper half of the range as converted negatives
    if (number > std::numeric_limits<std::size_t>::max() / 2)
        throw std::invalid_argument("number too large");
}
Maybe you should wrap the read function in another function whose purpose is to get an int and validate it.
EDIT: OK, int was just the first idea, so read and parse a string instead.
This is one of the situations where you cannot really do much. The compiler usually gives out a warning when converting signed to unsigned data-types, so you will have to trust the caller to heed that warning.
You could test this using a bitwise operation, such as the following:
void f(std::size_t number)
{
    // Note the parentheses around the & expression: != binds tighter than &,
    // and the 1 must have size_t width so the shift is well defined.
    if ((number & (std::size_t(1) << (sizeof(std::size_t) * 8 - 1))) != 0)
    {
        // high bit is set: either someone passed in a negative value,
        // or a very large size that we'll assume is invalid.
        // error case goes here
    }
    else
    {
        // valid value
    }
}
This code assumes 8-bit bytes. =)
Yes, large values will fail, but you could document that they are not allowed, if you really need to protect against this.
Who is using this API? Rather than using a solution like this, I would recommend that they fix their code. =) I would think that the "normal" best practice would be for callers to use size_t, and have their compilers complain loudly if they try to put signed values into them.
I had to think about this question a little, and this is what I would do.
If your function has the responsibility to throw an exception when it is passed a negative number, then your function's signature should accept a signed integer. That's because if you accept an unsigned number, you won't ever be able to tell unambiguously whether a number is negative, so you won't be able to throw the exception. IOW, you wouldn't be able to comply with your assignment of throwing an exception.
You should establish what is an acceptable input range and use a signed integer large enough to fully contain that range.
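A minimal sketch of that approach, assuming inputs fit in a 64-bit signed integer:
#include <cstddef>
#include <cstdint>
#include <stdexcept>

void f(std::int64_t number)
{
    if (number <= 0)
        throw std::invalid_argument("number must be > 0");
    // number is known positive here, so the conversion is safe
    std::size_t n = static_cast<std::size_t>(number);
    // ... use n ...
}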