std::destroy_at differences between major compilers?

Using compiler explorer with:
#include <iostream>
#include <memory>

struct test
{
    test(int i)
    {
        std::cout << "test::test(" << i << ")\n";
    }
    ~test()
    {
        std::cout << "~test()\n";
    }
};

template<>
void std::destroy_at(test* p)
{
    std::cout << "std::destroy_at<test>\n";
    p->~test();
}

int main()
{
    auto sp = std::make_shared<test>(3);
    return 33;
}
Gives the expected output using C++20 with gcc x86-64 or clang x86-64:
Program returned: 33
test::test(3)
std::destroy_at<test>
~test()
But x64 msvc v19.32 gives:
Program returned: 33
test::test(3)
~test()
It is as if the std::destroy_at specialization has no effect here.
Is this conforming behavior, a misunderstanding on my part, or an MSVC non-conformance or misconfiguration?

Specializing standard library function templates is undefined behavior since C++20 ([namespace.std]), so all three compilers are conforming. Whether make_shared's internal machinery destroys the object through std::destroy_at is an implementation detail: libstdc++ and libc++ happen to call it, while MSVC's STL invokes the destructor directly, so the specialization is simply never reached.
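If the goal is just to hook the destruction of the managed object, a conforming alternative is a custom deleter. A minimal sketch (not the original code; the deleter lambda is illustrative):

#include <iostream>
#include <memory>

struct test {
    test(int i) { std::cout << "test::test(" << i << ")\n"; }
    ~test()     { std::cout << "~test()\n"; }
};

int main() {
    // The deleter is part of shared_ptr's specified behaviour, unlike a
    // specialization of std::destroy_at, so it runs on gcc, clang and MSVC.
    std::shared_ptr<test> sp(new test(3), [](test* p) {
        std::cout << "custom deleter\n";
        delete p;
    });
    return 33;
}

The trade-off is losing make_shared's single allocation: a shared_ptr constructed from a raw pointer allocates its control block separately.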

Related

Why is gcc not catching an exception from a multi target function?

I'm using the target attribute to generate different function implementations depending on the CPU architecture. If one of the functions throws an exception, it doesn't get caught when I compile with gcc, but with clang it works as expected.
If there is only a single implementation of the function it does work for gcc as well.
Is this a bug in gcc?
Example (godbolt):
#include <stdexcept>
#include <iostream>

using namespace std;

__attribute__((target("default")))
void f() {
    throw 1;
}

__attribute__((target("sse4.2,bmi")))
void f() {
    throw 2;
}

int main()
{
    try {
        f();
    }
    catch (...)
    {
        std::cout << "Caught exception" << std::endl;
    }
}
Output of gcc:
terminate called after throwing an instance of 'int'
Output of clang:
Caught exception
I reported this, and a GCC developer confirmed it as a bug: link
For now, a workaround seems to be to wrap the function and use the [[gnu::noipa]] attribute to disable interprocedural optimizations:
__attribute__((target("default")))
void f() {
throw 1;
}
__attribute__((target("sse4.2")))
void f() {
throw 2;
}
[[gnu::noipa]]
void f1()
{
f();
}
int main()
{
try {
f1();
}
catch(... )
{
return 0;
}
return 1;
}
The bug is now fixed on GCC's master branch and should be released with GCC 13.

g++ not destructing thread locals

On my machine, GCC is not calling the destructors of thread_locals.
The code runs fine on clang 7 and Visual Studio.
Is this a bug?
I'm using MinGW GCC 8.1 on Windows.
Thread model: posix
gcc version 8.1.0 (x86_64-posix-seh-rev0, Built by MinGW-W64 project)
Here's the code:
#include <iostream>

struct A {
    A() {
        std::cout << "A()\n";
    }
    ~A() {
        std::cout << "~A()\n";
    }
    void print() {
        std::cout << "A\n";
    }
};

thread_local A a;

int main() {
    a.print();
    return 0;
}

Rvalue and forwarding references to C-style arrays

Playing with rvalue and forwarding references to C-style arrays, I stumbled upon weird behaviour. At first I thought it was due to my lack of understanding of how C-style arrays bind to && references, but now, after reading this related question, it seems to be an MS C++ compiler bug.
The code:
#include <boost/type_index.hpp>
#include <iostream>
#include <utility>

template<typename T>
void foo(T&&) {
    std::cout << boost::typeindex::type_id_with_cvr<T>() << '\n';
}

void bar(int(&)[2]) {
    std::cout << "int(&)[2]\n";
}

void bar(int(&&)[2]) {
    std::cout << "int(&&)[2]\n";
}

int main() {
    int arr[2];
    // MSVS 2018 / gcc 7.1.0 & clang 5.0.0
    foo(arr);            // Outputs: int(&)[2] / int(&)[2]
    foo(std::move(arr)); // Outputs: int(&)[2] / int[2]
    bar(arr);            // Outputs: int(&)[2] / int(&&)[2]... see the gcc/clang column
    bar(std::move(arr)); // Outputs: int(&)[2] / int(&&)[2]
    return 0;
}
Are gcc/clang right here? Can this be classified as an MS compiler bug?
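For what it's worth, the gcc/clang results match the usual deduction rules. A minimal sketch of that reading (the probe helper is mine, added only to inspect the deduced T; this is an interpretation, not an authoritative verdict):

#include <type_traits>
#include <utility>

template<typename T>
struct deduced { using type = T; };

// Never defined: used only inside decltype to observe what T deduces to.
template<typename T>
deduced<T> probe(T&&);

int arr[2];

// An lvalue array deduces T = int(&)[2]; reference collapsing then makes
// the parameter an lvalue reference.
static_assert(std::is_same<decltype(probe(arr))::type, int(&)[2]>::value, "");

// An xvalue array deduces T = int[2], so the parameter stays int(&&)[2].
static_assert(std::is_same<decltype(probe(std::move(arr)))::type, int[2]>::value, "");

// std::move(arr) itself is an xvalue of type int(&&)[2].
static_assert(std::is_same<decltype(std::move(arr)), int(&&)[2]>::value, "");

int main() {}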

Is this Clang's bug or my bug?

Run the following C++ program twice: once with the given destructor, and once with std::fesetround(value); removed from the destructor. Why do I receive different outputs? Shouldn't the destructor be called after the add function? I ran both versions on http://cpp.sh/, Clang++ 6.0, and g++ 7.2.0. For g++, I also included #pragma STDC FENV_ACCESS on in the source code; nothing changed.
#include <iostream>
#include <string>
#include <cfenv>

struct raii_feround {
    raii_feround() : value(std::fegetround()) { }
    ~raii_feround() { std::fesetround(value); }
    inline void round_up  () const noexcept { std::fesetround(FE_UPWARD  ); }
    inline void round_down() const noexcept { std::fesetround(FE_DOWNWARD); }
    template<typename T>
    T add(T fst, T snd) const noexcept { return fst + snd; }
private:
    int value;
};

float a = 1.1;
float b = 1.2;
float c = 0;
float d = 0;

int main() {
    {
        raii_feround raii;
        raii.round_up();
        c = raii.add(a, b);
    }
    {
        raii_feround raii;
        raii.round_down();
        d = raii.add(a, b);
    }
    std::cout << c << "\n"; // Output is: 2.3
    std::cout << d << "\n"; // Output is: 2.3 or 2.29999
}
Using the floating-point environment facilities requires inserting #pragma STDC FENV_ACCESS on into the source (or ensuring that it defaults to on for the implementation you are using). Although STDC pragmas are a C feature, the C++ standard says these facilities are imported into C++ by the <cfenv> header.
Doing so at cpp.sh results in “warning: ignoring #pragma STDC FENV_ACCESS [-Wunknown-pragmas]”.
Therefore, accessing and modifying the floating-point environment is not supported by the compiler at cpp.sh.
Separately, the identical outputs are partly a printing artifact: all I needed to do was std::cout << std::setprecision(30); before printing the values (with <iomanip> included as well).
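A self-contained sketch of that fix (volatile is used here only to discourage constant folding, since FENV_ACCESS support is spotty; the printed digits are illustrative, not guaranteed):

#include <cfenv>
#include <iomanip>
#include <iostream>

int main() {
    volatile float a = 1.1f, b = 1.2f;

    std::fesetround(FE_UPWARD);
    float c = a + b;
    std::fesetround(FE_DOWNWARD);
    float d = a + b;
    std::fesetround(FE_TONEAREST);  // restore the default mode

    std::cout << std::setprecision(30);  // enough digits to tell c and d apart
    std::cout << c << "\n" << d << "\n";
}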

Collision of nested typenames in MSVC

I have encountered a problem while using variadic templates, but the problem is not related to variadic templates and can be reproduced without them.
The problem comes down to identical member names in derived and base classes.
I've simplified the code to the following snippet:
#include <typeinfo>
#include <iostream>

struct A
{
    int member;
};

struct B : public A
{
    typedef A NEXT;
    short member;
};

struct Foo : public B
{
    typedef B NEXT;
    void Check()
    {
        std::cout << typeid(NEXT::member).name() << '\n';
        std::cout << typeid(NEXT::NEXT::member).name() << '\n';
        NEXT::NEXT::member = 1;
        NEXT::member = 2;
        std::cout << NEXT::NEXT::member << '\n';
        std::cout << NEXT::member << '\n';
    }
};

int main()
{
    Foo foo;
    foo.Check();
}
It compiles without warnings and works with GCC and clang, but produces wrong output with MSVC (at least with Visual Studio 2015 Update 1).
It seems that some name collision happens: MSVC treats NEXT::NEXT::member as NEXT::member in the assignments, while at the same time reporting the correct distinct types to typeid (as well as to std::is_same).
When compiled with GCC or clang the output is:
s
i
1
2
But in case of MSVC, the output is:
short
int
2
2
The following code compiles without warnings with GCC or clang but, as expected, produces an error with MSVC:
struct A
{
    int member;
};

struct B : public A
{
    typedef A NEXT;
    short member;
};

struct Foo : public B
{
    typedef B NEXT;
    short& GetB() { return NEXT::member; }
    int&   GetA() { return NEXT::NEXT::member; }
};
The MSVC error:
source_file.cpp(18): error C2440: 'return': cannot convert from 'short' to 'int &'
Is the code above valid? Is it a problem of MSVC, or am I invoking UB somehow?
Update
The problem cannot be reproduced in MSVC 2015 Update 3.
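For toolchains that do exhibit the collision, one workaround sketch (my suggestion, not from the original post) is to qualify the members with the concrete base-class names instead of chaining through the NEXT typedefs:

#include <iostream>

struct A { int member; };

struct B : public A {
    typedef A NEXT;
    short member;
};

struct Foo : public B {
    typedef B NEXT;
    short& GetB() { return B::member; }  // the short declared in B
    int&   GetA() { return A::member; }  // the int declared in A
};

int main() {
    Foo foo;
    foo.GetA() = 1;
    foo.GetB() = 2;
    std::cout << foo.GetA() << '\n' << foo.GetB() << '\n';  // prints 1 then 2
}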