I'm using VC++ to disassemble a very simple program I've written:
#include <iostream>
using namespace std;

int main()
{
    for (int i = 0; i < 11; i++)
    {
        cout << i << endl;
    }
    return 0;
}
I was hoping to shed some light on how cout works, but upon inspection, the resulting ASM points to an external source (I assume):
EXTRN __imp_?cout@std@@3V?$basic_ostream@DU?$char_traits@D@std@@@1@A
Is there a way to identify, from the above line, where specifically this points to, and how to access it? And how do I read that line in the first place?
You don't have to disassemble that. The MS sources of the streams are part of the Visual Studio installation. See: "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\crt\src"
cout is provided by the C++ runtime. In the case of Visual C++ that would be MSVCPxxxx.dll (xxxx depending on the version and debug/release).
You can look up that stuff with something like "CFF Explorer" or "Dependency Walker" and inspect the program's import directory.
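To connect the mangled name back to C++: the __imp_ prefix marks an entry reached through the import address table, and the rest is the MSVC-decorated name of std::cout, a global object of type std::basic_ostream<char, std::char_traits<char>>. A small sketch (my illustration, not from the original answer) showing that this is exactly the declaration <iostream> provides:

// Small illustration (not from the original listing): the mangled import above
// decorates this declaration from <iostream>: a global object named cout in
// namespace std whose type is basic_ostream<char, char_traits<char>>.
#include <iostream>
#include <type_traits>

static_assert(std::is_same<decltype(std::cout),
                           std::basic_ostream<char, std::char_traits<char>>>::value,
              "std::cout is a basic_ostream<char, char_traits<char>>");

int main()
{
    // In a /MD build this object lives in the C++ runtime DLL and is reached
    // through the __imp_ pointer seen in the listing.
    std::cout << "hello" << std::endl;
    return 0;
}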
I am using Visual Studio 2015 in 32-bit mode to compile a DLL (VSDll) which calls functions written in another DLL (a MATLAB function exported as a DLL by the MATLAB C++ compiler). This VSDll is then called by an .exe file. I can successfully compile the code into debug and release DLLs, and I can also run the release DLL from the .exe file without any problem. But when I try to run the code in debug mode in Visual Studio, I get the error shown in the attached screenshot ("Error message").
The attached figures also show the register contents and the stack frame where the error occurs ("Stack frame and registers"), as well as the stack trace ("Stack trace").
This might be a silly error, but I am new to C++ and I don't have a very deep understanding of memory heaps and stacks. I tried enabling/disabling different settings in debug mode as suggested in answers that have worked for others, but nothing works in my case. I was using the Visual Studio Professional edition before and could debug without this error. Recently I had to change to the Visual Studio Community edition, and since then I have this problem, but only when I set breakpoints in my code and debug it. Could this be the problem? Another thing I have noticed: I am using Visual Studio to compile various MATLAB functions as DLLs and use them to build customized DLLs to run in TRNExe. Each of them works fine in release mode, but in debug mode it is always the same boost_log-vc110-mt-1_49.dll that breaks, and always at the same address, 0x7e37a348.
Can anyone please help me solve this error? I appreciate any ideas or suggestions about what the problem could be.
Please also see the warning messages in the attached screenshot ("Warning message"). Could these be the source of the problem?
I have produced a minimal reproducible example below; it is still quite a few lines of code. Another complication is that I am trying to co-simulate between TRNSYS and MATLAB. TRNSYS has its components written as Type DLLs (Fortran or C++), and it prescribes a structure for these DLLs. I am trying to export the MATLAB function as a DLL, call it in the TRNSYS Type at the required parts of the code, and compile the result into a DLL that runs in TRNSYS Simulation Studio. So the C++ code links the MATLAB DLL to TRNDll, which is the TRNSYS kernel library. I don't know how to reduce that to a minimal reproducible example. In any case, I have compiled a MATLAB function that adds two numbers, called it in Type201.cpp (respecting the TRNSYS structure), and compiled that into the DLL to run in TRNSYS Studio. It works there. But when I try to run Type201.cpp in debug mode, I still get exactly the same error as before, at the exact same stack frame and address, in boost_log-vc110-mt-1_49.dll.
The Type201.cpp code is below:
extern "C" __declspec(dllexport) void TYPE201(void)
{
double a;
double b;
double Timestep, Time, StartTime, StopTime;
int index, CurrentUnit, CurrentType;
mxArray* x_ptr ;
mxArray* y_ptr ;
mxArray* z_ptr = NULL;
double* output = NULL;
//Get the Global Trnsys Simulation Variables
Time = getSimulationTime();
Timestep = getSimulationTimeStep();
CurrentUnit = getCurrentUnit();
CurrentType = getCurrentType();
StartTime = getSimulationStartTime();
StopTime = getSimulationStopTime();
//Set the Version Number for This Type
if (getIsVersionSigningTime())
{
int v = 17;
setTypeVersion(&v);
return;
}
//Do All of the Last Call Manipulations Here
if (getIsLastCallofSimulation())
{
addlibTerminate();
mclTerminateApplication();
return;
}
//Perform Any "End of Timestep" Manipulations That May Be Required
//Do All of the "Very First Call of the Simulation Manipulations" Here
if (getIsFirstCallofSimulation())
{
//Tell the TRNSYS Engine How This Type Works
int npar = 2;
int nin = 2;
int nder = 0;
int nout = 1;
int mode = 1;
int staticStore = 0;
int dynamicStore =1;
setNumberofParameters(&npar);
setNumberofInputs(&nin);
setNumberofDerivatives(&nder);
setNumberofOutputs(&nout);
setIterationMode(&mode);
setNumberStoredVariables(&staticStore, &dynamicStore);
return;
}
//Do All of the "Start Time" Manipulations Here - There Are No Iterations at the Intial Time
if (getIsStartTime())
{
index = 1; a = getInputValue(&index);
index = 2; b = getInputValue(&index);
if (!mclInitializeApplication(NULL, 0))
{
//fprintf(stderr, "Could not initialize the application.\n");
exit(1);
}
if (!addlibInitialize())
{
//fprintf(stderr, "Could not initialize the library.\n");
exit(1);
}
//Read in the Values of the Inputs from the Input File
return;
}
if (getIsEndOfTimestep())
{
return;
}
//---------------------------------------------------------------------------------------------------------------------- -
//ReRead the Parameters if Another Unit of This Type Has Been Called Last
//Get the Current Inputs to the Model
index = 1;
a = getInputValue(&index);
index = 2;
b = getInputValue(&index);
int noutput = getNumberOfOutputs();
//Create an mxArray to input into mlfAdd
x_ptr = mxCreateDoubleMatrix(1, 1, mxREAL);
y_ptr = mxCreateDoubleMatrix(1, 1, mxREAL);
memcpy(mxGetPr(x_ptr), &a, 1 * sizeof(double));
memcpy(mxGetPr(y_ptr), &b, 1 * sizeof(double));
mlfAdd(1, &z_ptr, y_ptr, x_ptr);
output = mxGetPr(z_ptr);
index = 1;
setOutputValue(&index, output);
return;
}
And this is the MATLAB function that adds the two numbers:
function [s] = add(a,b)
s = a+b;
end
I compiled it into a DLL from the command line using:
mcc -B csharedlib:addlib add.m
This is likely because some std and boost classes have different layouts in release and debug builds. In debug builds, additional members are inserted via macros to enable a better debugging experience (iterator debugging and the like). So, as documented by Microsoft, it is undefined behavior to mix code linked against different C++ runtimes. The MATLAB DLL is built in release mode and linked against the release C++ runtime, so it should not be used by code linked against the debug libraries!
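As a rough illustration (my own sketch, not code from the question; the exact numbers depend on the toolset and on _ITERATOR_DEBUG_LEVEL), the same standard container can even have a different size in the two configurations, which is why an object created on one side of such a boundary cannot safely be touched on the other:

// Illustrative sketch only: under MSVC, debug builds (_ITERATOR_DEBUG_LEVEL == 2)
// add bookkeeping members to containers, so the object layout differs from a
// release build. Passing such objects across a DLL boundary built against a
// different runtime is undefined behavior.
#include <vector>
#include <cstdio>

int main()
{
    // Typically prints a larger value in a debug build than in a release build.
    std::printf("sizeof(std::vector<int>) = %zu\n", sizeof(std::vector<int>));
    return 0;
}

The point is only that the layouts differ; the practical consequence for the question is that a debug build of VSDll should not exchange such objects with the release-built MATLAB and boost DLLs.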
I am trying to compile and run the following program called test.cu:
#include <iostream>
#include <math.h>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"

// Kernel function to add the elements of two arrays
__global__
void add(int n, float* x, float* y)
{
    int index = threadIdx.x;
    int stride = blockDim.x;
    for (int i = index; i < n; i += stride)
        y[i] = x[i] + y[i];
}

int main(void)
{
    int N = 1 << 20;
    float* x, * y;

    // Allocate Unified Memory – accessible from CPU or GPU
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));

    // initialize x and y arrays on the host
    for (int i = 0; i < N; i++) {
        x[i] = 2.0f;
        y[i] = 1.0f;
    }

    // Run kernel on 1M elements on the GPU
    add <<<1, 256>>> (N, x, y);

    // Wait for GPU to finish before accessing on host
    cudaDeviceSynchronize();

    // Check for errors (all values should be 3.0f)
    for (int i = 0; i < 10; i++)
        std::cout << y[i] << std::endl;

    // Free memory
    cudaFree(x);
    cudaFree(y);

    return 0;
}
I am using Visual Studio Community 2019, and it marks the "add <<<1, 256>>> (N, x, y);" line with an "expected an expression" error. I tried compiling it anyway, and somehow it compiles without errors, but when I run the .exe file it outputs a bunch of "1"s instead of the expected "3"s.
I also tried compiling using "nvcc test.cu", but initially it said "nvcc fatal : Cannot find compiler 'cl.exe' in PATH", so I added "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\Hostx64\x64" to PATH, and now compiling with nvcc gives the same result as compiling with Visual Studio.
In both cases the program never enters the add function.
I am pretty sure the code is right and that the problem has something to do with the installation, but I already tried reinstalling the CUDA toolkit and repairing Visual Studio, and it didn't help.
The kernel.cu example that appears when starting a new CUDA project in Visual Studio also didn't work; when run, it output "No kernel image available for execution on the device".
How can I solve this?
The nvcc version, in case it helps:
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:35_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.relgpu_drvr445TC445_37.28845127_0
Visual Studio provides IntelliSense for C++. In the C++ language, properly parsing angle brackets is troublesome: you've got < as less-than and as the template opener, and << as a shift. So the folks at NVIDIA chose just about the worst possible delimiter, <<<>>>, which makes it difficult for IntelliSense to work properly. The way to get full IntelliSense in CUDA is to switch from the Runtime API to the Driver API. The C++ is then just C++, and the CUDA is still (sort of) C++; there is no <<<>>> badness for the language parser to have to work around.
You could take a look at the difference between the matrixMul and matrixMulDrv samples. The <<<>>> syntax is handled by the compiler essentially by spitting out code that makes the equivalent Driver API calls. You'll link against cuda.lib instead of cudart.lib, and you may have to deal with a "mixed mode" program if you use CUDA-RT-only libraries. You could refer to this link for more information.
Also, this link explains how to add IntelliSense for CUDA in VS.
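Separately from the IntelliSense issue, the "no kernel image" failure can be made visible at runtime. A minimal diagnostic sketch, assuming it is placed right after the add<<<1, 256>>>(N, x, y) launch in test.cu (this snippet is not part of the original answer):

// Hedged diagnostic sketch: report why the kernel did not run instead of
// silently printing the unmodified y[] values.
cudaError_t launchErr = cudaGetLastError();       // error from the launch itself
cudaError_t syncErr = cudaDeviceSynchronize();    // error from kernel execution
if (launchErr != cudaSuccess)
    std::cerr << "launch failed: " << cudaGetErrorString(launchErr) << std::endl;
if (syncErr != cudaSuccess)
    std::cerr << "kernel failed: " << cudaGetErrorString(syncErr) << std::endl;

If this prints "no kernel image is available for execution on the device", the usual cause is that the code was not compiled for the GPU's compute capability (nvcc's -arch/-gencode options), which would match the behavior described in the question.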
I am trying to implement a DFT using the FFTW library (link to FFTW documentation).
All the libraries have been correctly linked, and the project builds just fine. However, the code stops running the moment any function from the FFTW library is called.
#include <iostream>
#include <fftw3.h>
using namespace std;

int main() {
    int vectorSize = 100;
    cout << vectorSize << endl;
    fftw_complex vec[vectorSize], vecOut[vectorSize];
    for (int i = 0; i < vectorSize; i++) {
        vec[i][0] = i;
        vec[i][1] = 1;
    }
    // Call to function to create an FFT plan
    fftw_plan plan = fftw_plan_dft_1d(vectorSize, vec, vecOut, FFTW_FORWARD, FFTW_ESTIMATE);
    cout << "test" << endl;
    return 0;
}
If I comment out the line where the fftw_plan is created, the code outputs 100 and "test" as expected. There are no issues with the build, as far as I can tell. I haven't been able to find any post describing a similar problem.
I am running this in Eclipse, using MinGW and the 32-bit version of the pre-compiled binary available for Windows (download link).
Any help would be really appreciated :)
FFTW requires input/output arrays to be 16-byte aligned. When you declare the arrays on the stack, this can't be guaranteed, so you need to call fftw_malloc (or a similar function) to allocate them. Also, your code only creates the plan but never executes it, so no FFT is actually carried out on the input data.
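A minimal sketch of what that fix could look like (my reading of the answer, not code from the original post):

#include <iostream>
#include <fftw3.h>

int main() {
    const int vectorSize = 100;

    // fftw_malloc returns suitably aligned memory, unlike stack arrays.
    fftw_complex* vec = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * vectorSize);
    fftw_complex* vecOut = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * vectorSize);

    for (int i = 0; i < vectorSize; i++) {
        vec[i][0] = i;   // real part
        vec[i][1] = 1;   // imaginary part
    }

    // Create the plan, then actually execute it.
    fftw_plan plan = fftw_plan_dft_1d(vectorSize, vec, vecOut, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);

    std::cout << "first output bin: " << vecOut[0][0] << " + " << vecOut[0][1] << "i" << std::endl;

    fftw_destroy_plan(plan);
    fftw_free(vec);
    fftw_free(vecOut);
    return 0;
}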
So I am making a game, an ASCII dungeon explorer, and my plan was to get the window size and use it to display the inventory beside the dungeon. I went on Google to find a function that returns the window size so I could print the inventory on the side. The parts that are commented out below are the code I found; I pasted it into my main.cpp just to try it out, and I planned to use other functions I found to get the maximum size of the window and to set the size. That pasted code gave me loads of errors when I went to run the game, somewhere around 180 errors from a header file called wingdi.h. I googled a bit more and found people changing some definitions in the project properties, which I tried, and it then gave me about 130 errors from different headers included by <Windows.h>. Someone said that one of the variables is already declared in windows.h and that it should be renamed, so I changed csbi to my_csbi; neither worked. Some people also said it was a problem with the code, so I decided to leave it for now, go back to my old code, and do something else (I was getting frustrated, lol). I commented it all out, hoping to come back to it later. But when I tried to run my game with all of that code commented out, it gave me the same errors. Do I have to reinstall Visual Studio 2015, start my code from scratch, or change some definitions?
#include <iostream>
#include <conio.h>
//#include <Windows.h>
#include "GameSystem.h"
using namespace std;

int main()
{
    //int x = 0;
    //int y = 0;
    //CONSOLE_SCREEN_BUFFER_INFO my_csbi;
    //GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &my_csbi);
    //x = my_csbi.srWindow.Right - my_csbi.srWindow.Left + 1;
    //y = my_csbi.srWindow.Bottom - my_csbi.srWindow.Top + 1;
    //printf("columns: %d\n", x);
    //printf("rows: %d\n", y);
    GameSystem gameSystem("level1.txt");
    gameSystem.playGame();

    printf("Enter any key to exit...\n");
    int tmp;
    cin >> tmp;
    return 0;
}
I am fairly new to programming so I'm sorry if my question is stupid or "simple" and if that offended you in some shape or form. :)
Thanks, Bulky.
EDIT: None of the "solutions" that worked for others worked for me, and I decided to ask this because, after I commented out all of that code, my old code didn't even run anymore (it was fine before).
You never include <WinGdi.h> directly. You always include <Windows.h>, which pulls in the required headers implicitly.
The GDI functions (from <WinGdi.h>) aren't going to do you any good if you are writing a console application. The GetConsoleScreenBufferInfo function is probably what you want to be using, but it is not a GDI function. It is a kernel function. Again, although it is technically declared in <WinCon.h>, you are not supposed to include that header directly. Just include <Windows.h>.
The following code works fine for me
(other than the fact that std::cin doesn't actually work for "enter any key to exit..."):
#include <iostream>
#include <conio.h>
#include <Windows.h>
using namespace std;

int main()
{
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &csbi);
    int x = csbi.srWindow.Right - csbi.srWindow.Left + 1;
    int y = csbi.srWindow.Bottom - csbi.srWindow.Top + 1;
    printf("columns: %d\n", x);
    printf("rows: %d\n", y);

    printf("Enter any key to exit...\n");
    int tmp;
    cin >> tmp;
    return 0;
}
Output:
columns: 90
rows: 33
Enter any key to exit...
If it does not work for you, one of the following must be true:
1. The problem resides in "GameSystem.h", which you have not shown us.
(That is fine; in the future, please post code as text, not pictures.)
2. You have created the wrong type of project (recreate the project using the Win32 Console Application template).
3. There is something wrong with your installation of Visual Studio (try reinstalling).
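As an aside, since the answer notes that std::cin doesn't really implement "enter any key to exit", here is a small sketch of one common alternative using _getch() from <conio.h>, which the question's code already includes (my addition, not part of the original answer):

#include <cstdio>
#include <conio.h>

int main()
{
    std::printf("Press any key to exit...\n");
    _getch();   // returns as soon as any single key is pressed; no Enter needed
    return 0;
}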
I'm running the following code, built with Visual Studio 2008 SP1, on a Windows Vista Business x64 quad-core machine with 8 GB of RAM.
If I build a release build and run it from the command line, it reports 31 ms. If I then start it from the IDE using F5, it reports 23,353 ms.
Here are the times: (all Win32 builds)
DEBUG, command line: 421ms
DEBUG, from the IDE: 24,570ms
RELEASE, command line: 31ms
RELEASE, from IDE: 23,353ms
code:
#include <windows.h>
#include <iostream>
#include <set>
#include <algorithm>
using namespace std;

int runIntersectionTestAlgo()
{
    set<int> set1;
    set<int> set2;
    set<int> intersection;

    // Create 100,000 values for set1
    for (int i = 0; i < 100000; i++)
    {
        int value = 1000000000 + i;
        set1.insert(value);
    }

    // Create 1,000 values for set2
    for (int i = 0; i < 1000; i++)
    {
        int random = rand() % 200000 + 1;
        random *= 10;
        int value = 1000000000 + random;
        set2.insert(value);
    }

    set_intersection(set1.begin(), set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end()));
    return intersection.size();
}

int main(){
    DWORD start = GetTickCount();
    runIntersectionTestAlgo();
    DWORD span = GetTickCount() - start;
    std::cout << span << " milliseconds\n";
}
Running under a Microsoft debugger (windbg, kd, cdb, Visual Studio Debugger) by default forces Windows to use the debug heap instead of the default heap. On Windows 2000 and above, the default heap is the Low Fragmentation Heap, which is insanely good compared to the debug heap. You can query the kind of heap you are using with HeapQueryInformation.
To solve your particular problem, you can use one of the many options recommended in this KB article: Why the low fragmentation heap (LFH) mechanism may be disabled on some computers that are running Windows Server 2003, Windows XP, or Windows 2000
For Visual Studio, I prefer adding _NO_DEBUG_HEAP=1 to Project Properties->Configuration Properties->Debugging->Environment. That always does the trick for me.
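A small sketch of the HeapQueryInformation check mentioned above (my own example, not from the original answer); run it once from the command line and once under F5 to see which heap you get:

#include <windows.h>
#include <iostream>

int main()
{
    ULONG heapInfo = 0;
    if (HeapQueryInformation(GetProcessHeap(), HeapCompatibilityInformation,
                             &heapInfo, sizeof(heapInfo), NULL))
    {
        // 0 = standard heap, 1 = look-aside lists, 2 = low-fragmentation heap (LFH)
        std::cout << "Heap compatibility value: " << heapInfo << std::endl;
    }
    return 0;
}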
Pressing pause while in the VS IDE shows that the additional time appears to be spent in malloc/free. This leads me to believe that the debugging support in MS's malloc and free implementations adds extra logic when a debugger is attached, which would explain the discrepancy in times between the console and the debugger.
EDIT: Confirmed by running with CTRL+F5 v. F5 (1047ms v. 9088ms on my machine)
So it sounds like this may just be what happens when one attaches the debugger. However, I just can't get my head around the performance changing from 30 ms to 23,000 ms because of that, especially when the rest of my code seems to run just as fast whether or not the debugger is attached.