C++ Boost Variant Error

I have been playing around with boost variant and came across a scenario that seems problematic, but that I suspect is my lack of knowledge on how to properly use boost variant. Here is a little tester program I put together:
main.cpp
#include <boost/variant.hpp>
#include <iostream>
#include <stdint.h>
typedef boost::variant<uint16_t, uint32_t> MyInt;
int main()
{
    uint16_t regular = 11;
    MyInt custom = regular;
    std::cout << custom << '\n';
    return 0;
}
Ok so the above works fine, but I get an error if I try to do the following:
int16_t invalid = 11;
MyInt custom = invalid; // This line causes the error.
I expected to get an error, but the error doesn't point to where the problem actually occurred in my main; it points into Boost's classes, which isn't the most helpful when working in a bigger project.
I started to look into boost visitors to surface the error in main, but didn't quite see how those would solve my issue here. Am I on the right track with visitors, or am I missing something else?
Edit:
The real question here is invalid assignment of any type not specified in my variant (e.g. char, std::string, double, etc.), not so much converting int16_t to uint16_t.

int16_t requires an implicit conversion to both uint16_t and uint32_t. Therefore, there is no best overload and the assignment/initialization fails to compile.
Coerce your type:
my_int = static_cast<uint32_t>(int16_t_value);

c++ casting for different targets (compilers)

Given the following:
code-link
Here is the code itself for convenience (and I am not sure my link is working):
#include <iostream>
#include <vector>
#include <stdint.h>
using namespace std;
int main()
{
    cout << "Hello World";
    std::vector<std::string> test_vect {"1", "2"};
    long unsigned int size = static_cast<long unsigned int>(test_vect.size());
    std::cout << "size: " << size << std::endl;
    return 0;
}
And the following compile options: g++ file.c -Wall -Wextra "-Werror" "-Wuseless-cast"
You can see here that I am casting vector.size() to long unsigned int; this gets flagged up as a useless cast on Wandbox (my link), however the same code running on my Linux box does not give a warning - but it will give me a different warning if I don't cast it.
I understand that unsigned long and size_t can be different types. But what I am trying to do is write some code that has no warnings with all the casting warnings set (maybe this is optimistic when cross compiling).
So, one compiler complains that I am converting types, so I cast, but then another compiler complains about a useless cast - so I remove the cast - and around we go :(
Is there a good approach to this so that I don't get warnings on either compiler?
I was going to just remove the -Wuseless-cast option, but I thought I would see if anyone has other ideas...
what I am trying to do is write some code that has no warnings with all the casting warnings set (maybe this is optimistic when cross compiling)
It's optimistic when cross compiling if you have casts.
Is there a good approach to this so that I don't get warnings on either compiler?
Don't have casts. Make your variable be of type std::size_t.
I was going to just remove the -Wuseless-cast option
That's the other option.
As Lightness Races in Orbit pointed out, std::size_t is an option: it is defined per platform to be exactly the type that size() returns, which keeps every compiler happy. If you somehow don't want to use that, making your variable auto or decltype(test_vect.size()) would be another option:
auto size = test_vect.size();
or
decltype(test_vect.size()) size = test_vect.size();
One argument that comes to mind is: Why does your size variable need to be long unsigned at all instead of std::size_t?
Just in case there is a compelling reason for that, how about (C++17):
long unsigned size;
if constexpr (sizeof(long unsigned) != sizeof(std::size_t))
    size = static_cast<long unsigned>(...);
else
    size = ...;

Convert a series of typedefs to a char array

Thanks in advance for the help.
I am trying to use a fairly large C/C++ (yes, I know) library and am running into an error. Unfortunately the error that I am getting back is extremely vague, but I do know which variable is causing it. All I would like to do is simply print this variable out to stdout so that I can see what is going on. However, this variable is not a standard type. It has the type
Opt_Int64 someVar = 10;
where
typedef int64_t VosT_Int64;
typedef VosT_Int64 Opt_Int64;
For the life of me, I can't figure out how to convert this Opt_Int64 type into a char array so that I can print it out. I would assume that the following function would do this for me
void int64ToChar(char mesg[], Opt_Int64 num) {
    for (int i = 0; i < 8; i++)
        mesg[i] = num >> (8 - 1 - i) * 8;
}
but it doesn't seem to work (I built a very simple hello world style program to test this). Either I simply don't understand typedef as well as I should or there must be something wrong with the function above. Isn't typedef just a way of telling the compiler that you're giving a type a different name?
int64_t is a standard type, defined in stdint.h. You can print it like this:
printf("%" PRId64 "\n", someVar);
The macro PRId64 and its kin are defined in inttypes.h.

Error in using max function with Armadillo sparse matrices

Here is the code; I am getting an error (type mismatch) on the line with max:
#include <iostream>
#include <stdlib.h>
#include <math.h>
#include <armadillo>
using namespace std;
using namespace arma;

int main(int argc, char** argv) {
    umat loc;
    loc << 0 << 0 << 3 << endr
        << 2 << 4 << 4 << endr;
    vec val = {1, 2, 3};
    sp_mat m(loc, val);
    double t = arma::max(sum(square(m), 1)) + 1.0;
    cout << t << endl;
    return 0;
}
Can somebody tell me why that error is happening and how to get around it?
Note: cout << max(sum(square(m),1)) prints the result to the console, but adding any number to the expression flags an error.
If you want to convert a 1x1 matrix into a pure scalar (like double), use the as_scalar() function. Same goes for any Armadillo expression that results in a 1x1 matrix.
It's a good idea to read the Armadillo documentation thoroughly before posting questions on Stackoverflow.
Modifying your example:
umat loc = { { 0, 0, 3 },
             { 2, 4, 4 } };
vec val = {1, 2, 3};
sp_mat m(loc, val);

m.print("m:");
max(sum(square(m), 1)).print("expression:");

double t = as_scalar( max(sum(square(m), 1)) );
cout << t << endl;
You haven't told us (and I can't find in the documentation) exactly what data type is returned by arma::max(sum(square(m),1)).
You have tested that whatever it is does not implicitly convert to double, and that it can be sent to a stream, and when that is done it looks like a double.
My guess is it is something that can be explicitly converted to double so try:
(double)arma::max(sum(square(m),1)) + 1.0
The documentation shows the returned value for a dense matrix being used to initialize a double, so that is obviously a type that can be explicitly converted to double. I had initially missed the thing you linked for me, which effectively says sum does something on a sparse matrix compatible with what it does on a dense one. So you can almost conclude (rather than just guess) that max(sum(m)) should be the same type (explicitly convertible to double).
If that doesn't work, we will really need a full quote of the error message, not just a summary of what it seems to mean.
Now that we have an error message, we can see this is a flaw in Armadillo's template metaprogramming:
Operations are stacked in template meta programming in order to avoid creating excess temporary objects. Then the meta programming must resolve the whole mess when the result is used.
If this is a minor flaw in the meta programming, you could add just one trivial temporary to fix it:
double t = arma::max(sum(square(m),1));
cout << t + 1.0 << endl;
But you probably already tried that. So you may need more temporaries and you probably need to give them exact correct types (rather than use auto). My first guess would be:
colvec v = sum(square(m),1);
Then see what works with arma::max(v)
(Earlier I made a negative comment on an answer that suggested starting with auto temporaries for each step. That answer was deleted. It wasn't far wrong. But I'd still say it was wrong to start there without seeing the template meta-programming failures and likely, though I'm not sure, wrong to use auto to try to bypass a meta-programming failure.)

Is it possible to verify method return types using the Boost concept check library?

I've started using the Boost concept check library. However, after reading the documentation, I can't seem to find a way to verify that a method in the concept returns a certain type. However, I don't see anything that says this isn't possible, either, which is odd.
So, is it possible to write a concept that would fail if a return type wasn't correct?
double pi() {
    return 3.1415;
}

int main() {
    int int_pi{pi()};
}
When initializing a variable using {} requires a conversion that leads to loss of information (a narrowing conversion), it's a compile error.
Alternatively:
#include <type_traits>

int main() {
    static_assert(std::is_same<decltype(pi()), double>::value, "pi() must return double");
}
I think the second snippet speaks for itself.

C++ automatic way to make truncations print an error at runtime

I have just joined a team that has thousands of lines of code like:
int x = 0;
x = something();
short y = x;
doSomethingImportantWith(y);
The compiler gives nice warnings saying: Conversion of XX bit type value to "short" causes truncation. I've been told that there are no cases where truncation really happens, but I seriously doubt it.
Is there a nice way to insert checks into each case having the effect of:
if (x > SHRT_MAX) printNastyError(__FILE__, __LINE__);
before each assignment? Doing this manually will take more time and effort than I'd like to use, and writing a script that reads the warnings and adds this stuff to the correct file as well as the required includes seems overkill -- especially since I expect someone has already done this (or something like it).
I don't care about performance (really) or anything other than knowing when these problems occur so I can either fix only the ones that really matter, or so I can convince management that it's a problem.
You could try compiling and running it with the following ugly hack:
#include <limits>
#include <cstdlib>

template<class T>
struct IntWrapper
{
    T value;

    template<class U>
    IntWrapper(U u) {
        if (u > std::numeric_limits<T>::max())
            std::abort();
        if (U(-1) < 0 && u < std::numeric_limits<T>::min()) // for signed U only
            std::abort();
        value = u;
    }

    operator T&() { return value; }
    operator T const&() const { return value; }
};

#define short IntWrapper<short>

int main() {
    int i = 1, j = 0x10000;
    short ii = i;
    short jj = j; // this aborts
}
Obviously, it may break code that passes short as a template argument and likely in other instances, so undefine it where it breaks the build. And you may need to add operator overloads so that the usual arithmetic works with the wrapper.
You can probably write a plugin for gcc to detect those truncations and emit a call to a function that checks the conversion is safe. You can write those plugins in C or Python. If you prefer to use clang, it also supports writing plugins.
I think the easiest way to do it would be to have the plugin convert the unsafe cast from int to short into a call to a function _convert_int_to_short_fail_if_data_loss(value). I'll leave it as an exercise for the reader how to write such a plugin.