I'm writing a Ruby C Extension against Ruby libs that were compiled with Visual Studio 2010. (I cannot change this or recompile the Ruby core I'm building against, because it's embedded in a third-party application.)
My project started out in Visual Studio 2010, but I later moved to Visual Studio 2013 without upgrading the project - so it's still using the VS2010 toolset.
In the Ruby include headers that I need to use there is a check:
#if _MSC_VER != 1600
#error MSC version unmatch: _MSC_VER: 1600 is expected.
#endif
Now, I thought that since my project uses the VS2010 toolset, this check would still pass in VS2013. And this is where I'm confused:
The solution builds and creates the .so file, which works; there are no warnings or errors in the Output window, but I just discovered that the Error List displays a warning. IntelliSense shows 1800 for _MSC_VER - yet it compiles.
1 IntelliSense: #error directive: MSC version unmatch: _MSC_VER: 1600 is expected. c:\Users\Thomas\Documents\subd\ThirdParty\include\ruby\win32\i386-mswin32_100\ruby\config.h 4
So what is going on here?
Isn't Visual Studio using the VS2010 compiler when my toolset is set to v100? (Then I'd expect _MSC_VER to be 1600.)
Isn't the #error directive supposed to stop compilation?
Is it IntelliSense that isn't picking up the toolset version and instead always assumes _MSC_VER is 1800?
_MSC_VER is directly tied to the toolset, not the IDE.
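To see which compiler the build actually invokes (as opposed to what IntelliSense shows), you can make the real compiler print its own _MSC_VER into the build output. A minimal MSVC-specific sketch (the STR helper macros are illustrative, not from the Ruby headers):

#define STR_HELPER(x) #x
#define STR(x) STR_HELPER(x)
/* Emits the toolset's real _MSC_VER into the Output window at compile time. */
#pragma message("_MSC_VER is " STR(_MSC_VER))

With the v100 toolset selected, this should print 1600 regardless of what the Error List claims.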
The buggy IntelliSense behaves the same way in VS2012 (with the VS2010 toolset)...
Related
I have a C++ project that compiles fine under Visual Studio 2013.
Today I installed Visual Studio 2017 Professional Edition, and there's a new setting under Project Settings > General called "Windows SDK Version", which defaults to 10.0.16299.0. Since I'm compiling Windows desktop programs targeting Windows 7 systems, I changed it to 8.1 - is this correct?
Generally speaking, a Windows SDK supports its "main" version and also the previous ones, but you still need to specify which Windows version your program targets. In fact, you're better off doing so, or else you can inadvertently use features not available in the version you want to support.
Given an SDK, you indicate which older Windows version to target by defining the WINVER and _WIN32_WINNT macros somewhere in your project files or in the C/C++ Preprocessor project settings in Visual Studio.
For example, the following definitions target Windows 7:
#define WINVER 0x0601
#define _WIN32_WINNT 0x0601
For more information, see Using the Windows Headers and Modifying WINVER and _WIN32_WINNT
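Note that these macros must be defined before any Windows header is included, or they have no effect. A minimal sketch (the #if guard is illustrative):

/* Define the target version before <windows.h> is pulled in. */
#define WINVER       0x0601   /* Windows 7 */
#define _WIN32_WINNT 0x0601
#include <windows.h>

#if _WIN32_WINNT >= 0x0602
/* Declarations here would only be visible when targeting Windows 8 or later. */
#endif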
Indeed, I raised this issue because my freshly installed Visual Studio could not build the VM, since SDK 16299 is now indeed the default. It's mentioned here:
https://en.wikipedia.org/wiki/Microsoft_Windows_SDK
Also, MS does not make finding older SDKs very easy: even though I googled "Microsoft Windows SDK 15063", I had to click through to another page all the way at the end of this page:
https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk
So all in all, it's now a small chore for newbies to get up and running on the VM. To start with, I think it should be made as easy as possible. (Complexity will come soon after that :)).
PS: I'm not sure about Windows 7 compatibility. But the current VM SDK is also listed as being for Windows 10.
I've recently updated my VS 2017 and now I cannot even build a default CUDA project (the one with the vector addition).
I suspect that this is due to the following error:
Severity: Error
Code: C1189
Description: #error: -- unsupported Microsoft Visual Studio version! Only the versions 2012, 2013, 2015 and 2017 are supported!
Project: ver2
File: c:\program files\nvidia gpu computing toolkit\cuda\v9.0\include\crt\host_config.h
Line: 133
The other errors are irrelevant and will disappear once I fix this one. Note that I am able to build and run simpleCUFFT from the CUDA samples.
Before the update I was able to build the default CUDA project, but I was not able to build the CUDA sample project. I've updated my VS 2017 using the VS installer and installed Windows SDK 10.0.15063.0. Attached is a screenshot with the installed components.
Please let me know if any additional information is required. I am aware of the following topic and since I am using the latest CUDA toolkit, I don't need to make changes in host_config.h.
Thanks,
Mikhail
Edit:
My VS version (as displayed in VS installer) is 15.5.0
My nvcc version is release 9.0, V9.0.176
Edit 2: I've tried changing host_config.h line 133 to:
#if _MSC_VER < 1600 || _MSC_VER > 1912
This error does not show up anymore; however, a bunch of "expression must have a constant value" errors show up in the type_traits file. I have no clue how to fix them.
After some painful time, I was able to solve the problem. Here is the answer for those who have a similar problem:
1) Make sure that you have the VC++ 2015.3 v140 toolset (it can be installed either from the web or from the Visual Studio installer).
2) In the project properties (General) -> Platform Toolset, choose Visual Studio 2015 (v140).
Edit (5/21/2018): I've just updated Visual Studio 2017 to the latest version, 15.7.1. From now on, I can choose the VS 2017 v141 toolset and it works fine.
I'm using CUDA 9.2 and VS 2017 (version 15.7.5). Simply modifying host_config.h (usually under C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.*\include\crt; the path is shown in the VS build output) works for me.
Change the line
#if _MSC_VER < 1600 || _MSC_VER > 1913
to
#if _MSC_VER < 1600 || _MSC_VER > 1914
or something similar, based on the version of cl.exe.
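If you are not sure what value your compiler reports, a quick way to check is to compile and run something like this (a trivial sketch, assuming MSVC):

#include <cstdio>

int main() {
#ifdef _MSC_VER
    /* Prints the compiler version so you know what upper bound to allow. */
    std::printf("_MSC_VER = %d\n", _MSC_VER);
#endif
    return 0;
}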
In VS update 15.4.3, Microsoft updated the version number of their CL compiler to 14.12
(https://www.visualstudio.com/ru-ru/news/releasenotes/vs2017-relnotes#15.4.4).
That's why CUDA 9.0.176 refuses to compile.
Today NVIDIA updated CUDA to 9.1.85, so just update CUDA to 9.1:
https://developer.nvidia.com/cuda-downloads
For anyone reading this question: update to CUDA 10. It works right out of the box. No need to install previous compiler toolsets and the like mentioned in other answers. Simply download CUDA 10, install it, and uninstall previous CUDA versions. Then make a new CUDA 10 project and place your code in it. It will work.
If you're getting errors, don't forget to set compute_xx,sm_xx appropriately, in Project Properties -> CUDA C/C++ -> Device -> Code Generation.
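For example, for a GPU with compute capability 6.1, the resulting nvcc option would look like this (the 61 values are illustrative; substitute your own GPU's compute capability):

-gencode arch=compute_61,code=sm_61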
I have a VS 2010 C++/CLI project that I imported into VS 2015. Because it links against some VS 2010 C++ libraries that I don't control, it needs to be compiled with the VS 2010 compiler. When I imported the project into VS 2015, I told VS not to upgrade it (I don't remember the exact options, but the project now says "Project Name (Visual Studio 2010)" in Solution Explorer). In the project properties, the Platform Toolset is listed as "Visual Studio 2010 (v100)":
One of the libraries I'm using has a header with some ifdefs that raise an error if _MSC_VER isn't one of the supported versions (Visual C++ 6.0-10.0). That error is being generated for this project, and I've determined that the _MSC_VER showing up in IntelliSense during the build is 1900 (the default for VS 2015).
How do I get the project to build with the 2010 version of the C++ compiler (_MSC_VER 1600)? Isn't that what the Platform Toolset option is supposed to control?
I misunderstood what was going on in Visual Studio. The build was actually working fine; the error being generated was coming from IntelliSense. There's a known bug in Visual Studio where IntelliSense doesn't properly reflect the _MSC_VER specified by the project's selected Platform Toolset. I'll leave the question up in case anyone else runs into this problem.
I added the use of Python scripts to my C++ project in Visual Studio 2010 as described in the CodeProject article: http://www.codeproject.com/Articles/11805/Embedding-Python-in-C-C-Part-I
This was working fine until I tried to compile my project with Visual Studio 2012. To compile with 2012 when 2010 is NOT installed, it's necessary to change the platform toolset from v100 to v110. After changing the toolset, the included "pyconfig.h" gives an include error because the file "basetsd.h" is not found (the same with Python 2.7 and 3.3). pyconfig.h contains an #ifdef that works for VS10 (and, I think, down to VS6), but the file seems to be missing for VS12:
#if defined(_MSC_VER) && _MSC_VER >= 1200
/* This file only exists in VC 6.0 or higher */
#include <basetsd.h>
#endif
If I add an include path (Windows Toolkit) or delete the include directive, it compiles until the linker cannot find or open "kernel32.lib". However, if I add a lib path for some kernel32.lib, all Python symbols become unresolved.
How do I get this to work? What's wrong with Python and VS2012?
Perhaps your version of "Python.h" is only compatible with Visual Studio 2010. I attached a link to a "Python.h" that claims to be for VS2012; try it out and let us know if it solves your issue.
http://pytools.codeplex.com/releases
It's running now with the following lib paths:
C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Lib;C:\Python27\libs
But I still don't understand it...
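For anyone verifying the fix, a minimal embedding test like the following should now compile, link, and run (this uses the standard CPython embedding API, not code from the question):

#include <Python.h>

int main() {
    Py_Initialize();   /* start the embedded interpreter */
    PyRun_SimpleString("print('hello from embedded Python')");
    Py_Finalize();
    return 0;
}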
I'm trying to build a C++ app on Windows using Qt.
My setup is:
Installed Vs2008,2010,2012
Installed Qt 5 RC1
Now when I #include <memory> and try to use std::unique_ptr, it tells me that it's not defined, so I looked in the VS2010 headers and saw that _HAS_CPP0X needs to be defined, so I added it to the .pro as DEFINES += _HAS_CPP0X
This still had no effect, so I ctrl+clicked the #include <memory> only to find it's using the memory header from:
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include
Which really doesn't have any std::unique_ptr in there!
Surely it should be looking at:
C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include ?
I figured I'd include <memory> via the full path, but this still fails with errors in the included memory header itself, relating to C++11 things such as move and rvalue references.
So what I'd like to know is:
Can Qt on Windows use C++11 features supported by Vs2010?
If yes then how?
If no, then I'm very disappointed, as developing a cross-platform Qt 5 app on Linux means it's not cross-platform, since it's impossible to build it for any other platform!
Edit:
Just so the solution to this is clear: download the source of Qt 5 and build it with MinGW and you'll be all set (including the C++11 .pro option in the accepted answer).
You can simply put:
CONFIG += c++11
in your .pro file. But upgrading your compiler to a recent g++ or clang is highly recommended.
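A minimal .pro sketch showing where the flag goes (the project and file names are placeholders):

# myapp.pro - illustrative qmake project using C++11
TEMPLATE = app
TARGET   = myapp
CONFIG  += c++11
SOURCES += main.cpp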
I'm running Qt Creator 2.6.0, so assuming the options menu hasn't changed: if you go to Tools > Options > Build & Run and then look in the Compilers tab, you should see a list of automatically detected compilers, hopefully including Microsoft Visual C++ Compiler 10.0. If not, it can be added manually.
As for C++11 support: if you are using CMake and imported the project into Qt Creator as such, you can add this to your CMake file: set(CMAKE_CXX_FLAGS "-std=c++0x")
If you are using qmake, then per the manual, set QMAKE_CXXFLAGS += -std=c++0x
Edit:
That said, Visual Studio 2010 (Visual C++ 10.0) turns on its C++11 support by default, although it should be noted that it is only a subset; here is a related question.
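For instance, std::unique_ptr compiles under VS2010 with no extra flags, as long as the VS2010 headers are actually the ones being picked up (a minimal sketch):

#include <memory>
#include <iostream>

int main() {
    // std::unique_ptr is part of the C++11 subset VC10 enables by default.
    std::unique_ptr<int> p(new int(42));
    std::cout << *p << '\n';
    return 0;
}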
NO
Can Qt on Windows use C++11 features supported by VS2010?
You have to get Visual Studio 2012 for that.
If no, then I'm very disappointed, as developing a cross-platform Qt 5 app on Linux means it's not cross-platform, since it's impossible to build it for any other platform!
On Mac, use the Xcode 4 compiler (Clang 2), and on any platform a compiler based on GCC 4.7 works.