C4389 signed/unsigned mismatch only for x86 compilation in C++

I am seeing a C4389: "'==': signed/unsigned mismatch" compiler warning when I compile the following code in Visual Studio with the x86 compiler at warning level 4.
#include <algorithm>
#include <vector>
void functionA()
{
    std::vector<int> x(10);
    for (size_t i = 0; i < x.size(); i++)
    {
        if (std::find(x.begin(), x.end(), i) != x.end())
            continue;
    }
}
Can someone explain to me why this happens and how I can resolve this?
Here is a link https://godbolt.org/z/81v3d5asP to the online compiler where you can observe this problem.

You have declared x as a vector of int, but the value you are searching for in the call to std::find (the variable i) is a size_t. That is an unsigned type, hence the warning.
One way to fix this is to cast i to an int in the call:
if (std::find(x.begin(), x.end(), static_cast<int>(i)) != x.end())
Another option (depending on your use case) would be to declare x as a vector of an unsigned integer type:
std::vector<unsigned int> x(10);
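A third option, sketched below, is to keep the loop index signed so no mixed comparison ever happens; this is safe here because the vector's size is small and known to fit in an int. The function name `countIndexHits` is illustrative, not from the original code:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical variant of the original loop: the index stays an int, so
// std::find compares int against int and C4389 never fires. Returns how
// many loop indices also appear as values in x.
int countIndexHits(const std::vector<int>& x)
{
    int hits = 0;
    for (int i = 0; i < static_cast<int>(x.size()); i++)
    {
        if (std::find(x.begin(), x.end(), i) != x.end())
            ++hits;
    }
    return hits;
}
```

For a vector of ten zero-initialized ints, only index 0 matches a stored value, so the function returns 1.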

Related

Error with Lambda Expression C++ in VS Code

I'm trying to sort an array m of class measure using a lambda expression as follows:
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#include <math.h>
using namespace std;
struct measure {
    int day;
    int cow;
    int change;
};

int main()
{
    int N;
    cin >> N;
    measure m[N];
    for (int i = 0; i < N; i++) {
        measure m_i;
        cin >> m_i.day >> m_i.cow >> m_i.change;
        m[i] = m_i;
    }
    sort(m, m + N, [](measure a, measure b) {return a.day < b.day;});
}
However, an error occurs when trying to build the task in VS Code (using C++17):
error: expected expression
sort(m, m + N, [](measure a, measure b) {return a.day < b.day;});
^
1 error generated.
Build finished with error(s).
I've tested this code on other compilers with no difficulties. Why is this error happening on VS Code?
Okay, so it does not look like you are actually compiling with C++17. I can reproduce this error if I roll back the GCC version to 4.x and leave the standard choice up to the compiler: that is what you get on an older compiler, while a newer compiler (or an explicit standard flag) runs it correctly. Most likely you are compiling as C++98 or C++03.
Note: I have taken the liberty of modifying your code to make it more idiomatic C++. Also, please avoid using namespace std.
A simple fix is to make sure you are using the right compiler version and standard flag for the features you need. Lambdas were introduced in C++11, so you are clearly compiling against a much older standard.
VS Code does not have a compiler built in. It uses the system compiler you have installed, and you can configure it to pass the flag -std=c++17 on the compiler command line.
Both gcc and clang++ support this command line flag.
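If you are unsure which standard your toolchain is really using, a quick diagnostic (a sketch, not part of the original question) is to inspect the `__cplusplus` macro:

```cpp
// The __cplusplus macro reports the standard the compiler is compiling
// against: 199711L for C++98/03, 201103L for C++11, 201402L for C++14,
// 201703L for C++17. (MSVC reports 199711L unless /Zc:__cplusplus is set.)
long language_standard()
{
    return __cplusplus;
}
```

If this reports 199711L under GCC or Clang, the -std=c++17 flag is not reaching the compiler.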

What is "int (*arr)[cols]" where "cols" is a variable, in C++?

I am reading this article to learn how to dynamically allocate memory for a two-dimensional array.
I notice that a variable cols can be used as the size in the declaration int (*arr)[cols], since the C language has the variable-length array (VLA) feature, so I tried porting the code to C++ like this:
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <cstring>
void* allocate(size_t rows, size_t cols)
{
    int (*arr)[cols] = (int (*)[cols])malloc(rows * sizeof(*arr));
    memset(arr, 0, rows * sizeof(*arr));
    return arr;
}

int main() {
    size_t rows, cols;
    scanf("%zu %zu", &rows, &cols);
    int (*arr)[cols] = (int (*)[cols])allocate(rows, cols);
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            printf("%3d", arr[i][j]);
        }
        printf("\n");
    }
}
This compiles with GCC 11.2 and -std=c++11.
To my surprise, it works fine and the compiler does not report any warning. As far as I know, C++ has no VLA feature, so I expected this code to be rejected. Why does it work?
-std=c++11 doesn't mean "compile strictly according to C++11" but "enable C++11 features". Similarly, -std=gnu++11 means enable gnu++11 features, a superset of C++11; when no -std flag is given, GCC defaults to one of these GNU dialects (gnu++17 in GCC 11).
To get strictly compliant behavior, you must use -std=c++11 -pedantic-errors. And then you get this:
error: ISO C++ forbids variable length array 'arr' [-Wvla]
See What compiler options are recommended for beginners learning C? for details. It was written for C but applies identically to g++ as well.
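In portable C++ the usual replacement for a runtime-sized 2D array is a single contiguous std::vector indexed row-major. The sketch below uses illustrative helper names (`make_grid`, `at`) that are not from the original code:

```cpp
#include <cstddef>
#include <vector>

// Allocate a rows x cols grid as one contiguous, zero-initialized buffer,
// the standard-C++ equivalent of the malloc+memset in the question.
std::vector<int> make_grid(std::size_t rows, std::size_t cols)
{
    return std::vector<int>(rows * cols, 0);
}

// Row-major element access: element (r, c) lives at index r * cols + c.
inline int& at(std::vector<int>& grid, std::size_t cols,
               std::size_t r, std::size_t c)
{
    return grid[r * cols + c];
}
```

Unlike the VLA version, this compiles cleanly under -std=c++11 -pedantic-errors and also handles allocation failure via exceptions rather than a null pointer.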

Why am I getting an error with MSVC when using a C++17 parallel execution algorithm on Boost zip iterators?

I have some trouble using C++17 parallel execution algorithm with Boost iterators on MSVC. Here is my code:
#include <vector>
#include <execution>
#include <boost/range/combine.hpp>
int main(void)
{
    std::vector<double> const v1(20, 1);
    std::vector<double> const v2(20, 2);
    std::vector<double> v_out;
    v_out.resize(20);

    auto const& combi = boost::combine(v1, v2);

    auto const run = [](auto const& v)
    {
        return boost::get<0>(v) + boost::get<1>(v);
    };

    std::transform(std::execution::par, combi.begin(), combi.end(), v_out.begin(), run);
    return 0;
}
I get the following error:
error C2338: Parallel algorithms require forward iterators or stronger.
This seems to be due to the Boost zip iterator, but I don't understand why, since it compiles without error on GCC. What am I missing here? Is this an implementation bug in Visual C++?
Zip iterators can be at best input iterators in the C++17 iterator hierarchy, because their reference types are not real references.
It is undefined behavior to pass an input iterator to a parallel algorithm; MSVC's implementation just happens to check the precondition more aggressively than GCC's.
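For this particular element-wise sum, one workaround is to skip the zip iterator entirely: std::transform has a two-input-range overload. A sketch (the function name `add_ranges` is illustrative; on toolchains with a working parallel STL, std::execution::par can be passed as the first argument of the transform call):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Element-wise sum of two equal-length ranges using the binary overload
// of std::transform, so no zip iterator is needed. Both input iterators
// are random-access, which satisfies the parallel algorithms' forward-
// iterator requirement if an execution policy is added.
std::vector<double> add_ranges(std::vector<double> const& v1,
                               std::vector<double> const& v2)
{
    std::vector<double> v_out(v1.size());
    std::transform(v1.begin(), v1.end(), v2.begin(), v_out.begin(),
                   std::plus<>{});
    return v_out;
}
```

With v1 filled with 1s and v2 with 2s, every output element is 3, matching the intent of the original lambda.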

How to convert from boost::multiprecision::cpp_int to cpp_dec_float<0> (rather than to cpp_dec_float_50, etc.)?

As is made clear in the Boost Multiprecision library documentation, it is straightforward to convert from a boost::multiprecision::cpp_int to a boost::multiprecision::cpp_dec_float:
// Some interconversions between number types are completely generic,
// and are always available, albeit the conversions are always explicit:
cpp_int cppi(2);
cpp_dec_float_50 df(cppi); // OK, int to float // <-- But fails with cpp_dec_float<0>!
The ability to convert from a cpp_int to a fixed-width floating-point type (i.e., a cpp_dec_float_50) gives one hope that it might be possible to convert from a cpp_int to an arbitrary-width floating-point type in the library - i.e., a cpp_dec_float<0>. However, this doesn't work; the conversion fails for me in Visual Studio 2013, as the following simple example program demonstrates:
#include <boost/multiprecision/number.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
int main()
{
boost::multiprecision::cpp_int n{ 0 };
boost::multiprecision::cpp_dec_float<0> f{ n }; // Compile error in MSVC 2013
}
It does succeed to convert to cpp_dec_float_50, as expected, but as noted, I am hoping to convert to an arbitrary precision floating point type: cpp_dec_float<0>.
The error appears in the following snippet from the internal Boost Multiprecision code, in the file <boost/multiprecision/detail/default_ops.hpp>:
template <class R, class T>
inline bool check_in_range(const T& t)
{
    // Can t fit in an R?
    if (std::numeric_limits<R>::is_specialized && std::numeric_limits<R>::is_bounded
        && (t > (std::numeric_limits<R>::max)()))
        return true;
    return false;
}
The error message is:
error C2784:
'enable_if::result_type,detail::expression::result_type>,bool>::type
boost::multiprecision::operator >(const
boost::multiprecision::detail::expression
&,const
boost::multiprecision::detail::expression &)' :
could not deduce template argument for 'const
boost::multiprecision::detail::expression &'
from 'const next_type'
Might it be possible to convert a boost::multiprecision::cpp_int to a boost::multiprecision::cpp_dec_float<0> (rather than converting to a floating-point type with a fixed decimal precision, as in cpp_dec_float_50)?
(Note that in my program, only one instance of the floating-point number is instantiated at any time, and it is updated infrequently, so I am fine with having this one instance take up lots of memory and take a long time to support really huge numbers.)
Thanks!
I don't have much experience with Boost Multiprecision, but it seems to me that the class template cpp_dec_float<> is what they call a backend, and you need to wrap it in a number<> adaptor in order to use it as an arithmetic type.
Here's my take on it: Live On Coliru
#include <boost/multiprecision/number.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
namespace mp = boost::multiprecision;
int main()
{
    using Int = mp::cpp_int;

    // let's think of a nice large number
    Int n = 1;
    for (Int f = 42; f > 0; --f)
        n *= f;

    std::cout << n << "\n\n"; // print it for vanity

    // let's convert it to cpp_dec_float
    // and... do something with it
    using Dec = mp::number<mp::cpp_dec_float<0> >;
    std::cout << n.convert_to<Dec>();
}
Output:
1405006117752879898543142606244511569936384000000000
1.40501e+51
If convert_to<> is allowed, then the explicit conversion constructor will also work, I expect:
Dec decfloat(n);

Why a variable length array compiles in this c++ program?

It is said that arrays are allocated at compile time, so the size must be a constant that is known at compile time.
But the following example also works. Why?
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> ivec;
    int k;
    while (cin >> k)
        ivec.push_back(k);

    int iarr[ivec.size()];
    for (size_t k = 0; k < ivec.size(); k++)
    {
        iarr[k] = ivec[k];
        cout << iarr[k] << endl;
    }
    return 0;
}
Compile your code with -pedantic.
Most compilers support variable-length arrays as a compiler extension.
The code works thanks to that extension; however, as you noted, it is not standard-conforming and hence not portable.
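The portable alternative is simply another std::vector sized at run time, since the program already has one. A minimal sketch (the function name `copy_to_array` is illustrative):

```cpp
#include <cstddef>
#include <vector>

// Standard-conforming replacement for the VLA: a std::vector can take a
// run-time size, so no compiler extension is needed.
std::vector<int> copy_to_array(const std::vector<int>& ivec)
{
    std::vector<int> iarr(ivec.size());
    for (std::size_t k = 0; k < ivec.size(); k++)
        iarr[k] = ivec[k];
    return iarr;
}
```

This compiles cleanly with -pedantic on any conforming compiler, whereas the `int iarr[ivec.size()];` declaration is rejected under -pedantic-errors.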