Visual C++ 2013 -- _tzname and _timezone

I have a C++ Date/Time library that I have literally used for decades. It's been rock solid without any issues. But today, as I was making some small enhancements, my test code started complaining violently. The following program demonstrates the problem:
#include <iostream>
#include <time.h>
int main() {
    _tzset();  // initialize the CRT time-zone globals from the TZ environment variable
    std::cout << "_tzname[ 0 ]=" << _tzname[ 0 ] << std::endl;
    std::cout << "_tzname[ 1 ]=" << _tzname[ 1 ] << std::endl;
    std::cout << "_timezone=" << _timezone << std::endl;
    size_t ret;
    char buf[ 64 ];
    _get_tzname(&ret, buf, 64, 0);  // secure alternative to _tzname[ 0 ]
    std::cout << "_get_tzname[ 0 ]=" << buf << std::endl;
    _get_tzname(&ret, buf, 64, 1);  // secure alternative to _tzname[ 1 ]
    std::cout << "_get_tzname[ 1 ]=" << buf << std::endl;
}
If I run this in the Visual Studio debugger I get the following output:
_tzname[ 0 ]=SE Asia Standard Time
_tzname[ 1 ]=SE Asia Daylight Time
_timezone=-25200
_get_tzname[ 0 ]=SE Asia Standard Time
_get_tzname[ 1 ]=SE Asia Daylight Time
This is correct.
But if I run the program from the command line I get the following output:
_tzname[ 0 ]=Asi
_tzname[ 1 ]=a/B
_timezone=0
_get_tzname[ 0 ]=Asi
_get_tzname[ 1 ]=a/B
Note that the TZ environment variable is set to Asia/Bangkok, which is a synonym for SE Asia Standard Time, or UTC+7. You will notice in the command-line output that the _tzname[ 0 ] value is the first 3 characters of Asia/Bangkok and _tzname[ 1 ] is the next 3 characters. I have some thoughts on this, but I cannot make sense of it, so I'll just stick to the facts.
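For reference, the Microsoft CRT documents the TZ variable as tzn[+|-]hh[:mm[:ss]][dzn], i.e. a three-letter zone name followed by the offset in hours west of UTC, rather than an IANA name like Asia/Bangkok. The snippet below is only an illustration of that format (it is separate from my test program above, and ICT-7 is assumed as a CRT-style stand-in for UTC+7):
#include <iostream>
#include <stdlib.h>
#include <time.h>

int main() {
    _putenv("TZ=ICT-7");  // three-letter name + hours west of UTC (negative = east of UTC)
    _tzset();
    std::cout << "_tzname[ 0 ]=" << _tzname[ 0 ] << std::endl;  // expect "ICT"
    std::cout << "_timezone=" << _timezone << std::endl;        // expect -25200
}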
Note that I included the calls to _get_tzname(...) to demonstrate that I am not getting caught in some kind of deprecation trap, given that _tzname and _timezone are deprecated.
I'm on Windows 7 Professional and I am linking statically to the runtime library (Multi-threaded Debug (/MTd)). I recently installed Visual Studio 2015 and while I am not using it yet, I compiled this program there and the results are the same. I thought there was a chance that I was somehow linking with the VS2015 libraries but I cannot verify this. The Platform Toolset setting in both projects reflects what I would expect.
Thank you for taking the time to look at this...

Related

std::cos gives different result when run with valgrind

I've discovered an issue impacting several unit tests at my work which only happens when the unit tests are run with valgrind: the values returned from std::cos and std::sin are different for identical inputs depending on whether the unit test is run in isolation or under valgrind.
This issue only seems to happen for some specific inputs, because many unit tests pass which run through the same code.
Here's a minimal reproducible example (slightly complicated so that my compiler wouldn't optimize away any of the logic):
#include <complex>
#include <iomanip>
#include <iostream>
int main()
{
    std::complex<long double> input(0, 0), output(0, 0);
    input = std::complex<long double>(39.21460183660255L, -40);
    std::cout << "input: " << std::setprecision(20) << input << std::endl;
    output = std::cos(input);
    std::cout << "output: " << std::setprecision(20) << output << std::endl;
    if (std::abs(output) < 5.0)
    {
        std::cout << "TEST FAIL" << std::endl;
        return 1;
    }
    std::cout << "TEST PASS" << std::endl;
    return 0;
}
Output when run normally:
input: (39.21460183660254728,-40)
output: (6505830161375283.1118,117512680740825220.91)
TEST PASS
Output when run under valgrind:
input: (39.21460183660254728,-40)
output: (0.18053126362312540976,3.2608771240037195405)
TEST FAIL
Notes:
OS: Red Hat Enterprise Linux 7
Compiler: Intel OneAPI 2022 Next generation DPP/C++ Compiler
Valgrind: 3.20 (built with same compiler), also occurred on official distribution of 3.17
Issue did not manifest when unit tests were built with GCC-7 (cannot go back to that compiler) or GCC-11 (another larger bug with boost prevents us from using this with valgrind)
-O0/1/2/3 make no difference on this issue
the only compiler flag I have set is "-fp-speculation=safe"; if it is unset, numerical precision issues appear in other unit tests
Are there any better ways I can figure out what's going on to resolve this situation, or should I submit a bug report to valgrind? I hope this issue is benign, but I want to be able to trust my valgrind output.
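One diagnostic sketch (an idea of mine, assuming the discrepancy is precision-related): the valgrind manual notes that x87 long double arithmetic is emulated at 64-bit precision, so comparing the double and long double evaluations of the same expression should show how sensitive this particular input is to the extra precision:
#include <complex>
#include <iomanip>
#include <iostream>

int main()
{
    // Evaluate the same input at both precisions; a wild divergence suggests
    // the result depends on the extra x87 bits that valgrind does not emulate.
    const long double x = 39.21460183660255L;
    const std::complex<long double> zl(x, -40.0L);
    const std::complex<double> zd(static_cast<double>(x), -40.0);
    std::cout << std::setprecision(20)
              << "long double: " << std::cos(zl) << std::endl
              << "double     : " << std::cos(zd) << std::endl;
    return 0;
}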

Did Visual Studio 2022 17.4.3 break std::round?

(Note: This problem occurs for me only when the compiler switch /arch:AVX is set. More at the bottom)
My gtest unit tests have done this for 7 years
ASSERT_EQ(-3.0, std::round(-2.5f)); // (Note the 'f' suffix)
According to cppreference, std::round is supposed to round AWAY from zero, right? Yet with the current release, this test just started failing. Am I missing something? All I did was update my Visual Studio 2022 to 17.4.3. My co-worker with 17.3.3 does not have this problem.
EDIT: I don't know if the problem is GTEST and its macros or assumptions my unit test makes about equality. I put the following two lines of code into my test
std::cerr << "std::round(-2.5) = " << std::round(-2.5) << std::endl;
std::cerr << "std::round(-2.5f) = " << std::round(-2.5f) << std::endl;
They produce the following output. The second one is wrong, is it not?
std::round(-2.5) = -3
std::round(-2.5f) = -2
EDIT #2: As I note above, this only occurs when I set the compiler flag /arch:AVX. If I just create a console app and do not set the flag, or if I explicitly set it to /arch:IA32, the problem goes away. But the question then becomes: is this a bug, or am I just not supposed to use that option?
This is a known bug; see the bug report on Developer Community, which is already in the "pending release" state.
For the sake of a complete, standalone answer, the minimal example from there is (godbolt):
#include <cmath>
#include <iostream>

int main()
{
    std::cout << "MSVC version: " << _MSC_FULL_VER << '\n';
    std::cout << "Round 0.5f: " << std::round(0.5f) << '\n';
    std::cout << "Round 0.5: " << std::round(0.5) << '\n';
}
compiled with AVX or AVX2.
The correct output e.g. with MSVC 19.33 is
MSVC version: 193331631
Round 0.5f: 1
Round 0.5: 1
while the latest MSVC 19.34 outputs
MSVC version: 193431931
Round 0.5f: 0
Round 0.5: 1
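Until the fixed compiler ships, a possible workaround (a sketch of mine, not taken from the bug report) is to avoid the affected float overload by rounding in double, which is exact because every float is exactly representable as a double:
#include <cmath>
#include <iostream>

// Promote to double so the buggy float overload of std::round is never called.
inline float round_via_double(float x)
{
    return static_cast<float>(std::round(static_cast<double>(x)));
}

int main()
{
    std::cout << round_via_double(-2.5f) << '\n';  // prints -3, as expected
}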

MPI stopped working on multiple cores suddenly

This piece of code was working fine with MPI before:
#include <mpi.h>
#include <iostream>
using namespace std;
int id, p;
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    cout << "Processor " << id << " of " << p << endl;
    cout.flush();
    MPI_Barrier(MPI_COMM_WORLD);
    if (id == 0) cout << "Every process has got to this point now!" << endl;
    MPI_Finalize();
}
Giving the output:
Processor 0 of 4
Processor 1 of 4
Processor 2 of 4
Processor 3 of 4
Every process has got to this point now!
When run on 4 cores with the command mpiexec -n 4 ${executable filename}$
I restarted my laptop (I'm not sure if this is the cause) and ran the same code, and it outputs on one core:
Processor 0 of 1
Every process has got to this point now!
Processor 0 of 1
Every process has got to this point now!
Processor 0 of 1
Every process has got to this point now!
Processor 0 of 1
Every process has got to this point now!
I'm using Microsoft MPI and the project configuration hasn't changed.
I'm not really sure what to do about this.
I also installed Intel Parallel Studio and integrated it with Visual Studio before restarting.
But I'm still compiling with Visual C++ (same configuration as when it was working fine).
The easy fix was to uninstall Intel Parallel Studio.
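A plausible explanation (my inference, not confirmed): Intel Parallel Studio ships its own mpiexec, and if it ends up ahead of the Microsoft MPI launcher on the PATH, an MS-MPI binary gets started as four independent single-rank jobs, which matches the repeated "Processor 0 of 1" output. Before uninstalling, you can check which launcher is being picked up:
where mpiexec
The first path listed is the one that runs when you type mpiexec.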

Unexpected output with boost coroutine2

This is the test case
#include <boost/coroutine2/all.hpp>
#include <iostream>
#include <cassert>
int main() {
    auto sum = 0;
    using Coroutine_t = boost::coroutines2::coroutine<int>::push_type;
    auto coro = Coroutine_t{[&](auto& yield) {
        for (;;) {
            auto val = yield.get();
            std::cout << "Currently " << val << std::endl;
            sum += val;
            yield(); // jump back to the starting context
        }
    }};
    std::cout << "Transferring 1" << std::endl;
    coro(1); // transfer {1} to the coroutine-function
    std::cout << "Transferring 2" << std::endl;
    coro(2); // transfer {2} to the coroutine-function
    // assert(sum == 3);
}
For some reason the assert at the end fails, with the value of sum being 14. I installed Boost (version 1.63) context with the commands
./bootstrap.sh --prefix=build --with-libraries=context
./b2 --prefix=build --with-context
I am running this on macOS 10.12.6. The compile command was
g++ -std=c++14 -O3 -I boost co.cpp boost/stage/lib/libboost_*.a
where boost is the Boost folder downloaded from SourceForge.
Strangely, the output of the above test case without the assert is this:
Transferring 1
Currently 0
Transferring 2
Currently 2
Currently 2
Why is the first line printed in the coroutine Currently 0? Also, why is Currently 2 printed twice here? The latter can be seen here as well: https://wandbox.org/permlink/zEL9fGT5MrzWGgQB
For the second question, it seems like after the main thread has finished, control is transferred back to the coroutine one last time. Why is that? That seems strange.
UPDATE: For the second question, the behavior seems to be different in Boost 1.65: https://wandbox.org/permlink/JQa9Wq1jp8kB49Up
The output of your app with boost-1.65.1 is:
Transferring 1
Currently 1
Transferring 2
Currently 2
Your problem was probably caused by a bug in boost-1.63 that has since been fixed.
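For reference, here is a minimal sketch of the intended push_type behavior on a fixed Boost (1.65 or later); this is my example rather than the original code, and it uses the documented range-based-for support of pull_type, so each sink(v) resumes the loop body with exactly the pushed value:
#include <boost/coroutine2/all.hpp>
#include <iostream>

int main() {
    using coro_t = boost::coroutines2::coroutine<int>;
    int sum = 0;
    coro_t::push_type sink([&](coro_t::pull_type& source) {
        for (int val : source) {  // one iteration per value pushed into the coroutine
            std::cout << "Currently " << val << std::endl;
            sum += val;
        }
    });
    sink(1);  // transfer {1} to the coroutine-function
    sink(2);  // transfer {2} to the coroutine-function
    std::cout << "sum=" << sum << std::endl;  // prints sum=3
}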

Running OpenCL under Windows 10

I want to run an OpenCL application under Windows 10 using my GTX 970 graphics card. But the following code doesn't work =(
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>
#include <CL/cl.h>
#include <vector>
#include <fstream>
#include <iostream>
#include <iomanip>
int main() {
    std::vector<cl::Platform> platforms;
    std::vector<cl::Device> devices;
    try
    {
        cl::Platform::get(&platforms);
        std::cout << platforms.size() << std::endl;
        for (cl_uint i = 0; i < platforms.size(); ++i)
        {
            platforms[i].getDevices(CL_DEVICE_TYPE_GPU, &devices);
        }
        std::cout << devices.size() << std::endl;
    }
    catch (const cl::Error& e) {
        std::cout << std::endl << e.what() << " : " << e.err() << std::endl;
    }
    return 0;
}
It gives me error code -1. I am using Visual Studio 2015 Community Edition with the NVIDIA CUDA SDK v8.0 installed and paths configured, so the compiler and linker know about the SDK.
Can someone please explain what's wrong with this snippet?
Thanks in advance!
EDIT: Can someone also explain to me why, when I try to debug this code, it fails when getting the platform id, yet when I do not debug it, it prints that I have 2 platforms (my GPU card and integrated GPU)?
Your iGPU is probably Intel (I assume you did a combo of GTX 970 + Intel CPU for gaming), which also has some experimental OpenCL 2.1 platform support that could give an error for an OpenCL 1.2 app at device picking or platform picking (I had a similar problem).
You should check returned error codes from opencl api commands. Those give better info about what happened.
For example, my system has two platforms for Intel, one being experimental 2.1 only for cpu and one being normal 1.2 for both gpu and cpu.
To check that, query the platform version and compare the returned string's 7th and 9th characters against '1' and '2' for 1.2, or '2' and '0' for 2.0. This should eliminate the experimental 2.1 platform, which gives '2' at the 7th char and '1' at the 9th char (where indexing starts at 0, of course).
https://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/clGetPlatformInfo.html
CL_PLATFORM_VERSION
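Here is a minimal sketch of that check using the cl.hpp wrapper from the question (my illustration; the version string format "OpenCL <major>.<minor> <platform-specific>" is from the man page linked above):
#include <CL/cl.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<cl::Platform> platforms;
    if (cl::Platform::get(&platforms) != CL_SUCCESS) return 1;
    for (const auto& p : platforms) {
        // "OpenCL 1.2 ..." -> major digit at index 7, minor digit at index 9
        const std::string ver = p.getInfo<CL_PLATFORM_VERSION>();
        const bool is12 = ver.size() > 9 && ver[7] == '1' && ver[9] == '2';
        std::cout << ver << (is12 ? "   <- OpenCL 1.2 platform" : "") << std::endl;
    }
    return 0;
}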
Nvidia must have 1.2 support already.
If I'm right, you may query for CPU devices and get 2 from Intel and 1 from Nvidia (if it has any) in return.