`DebugActiveProcessStop' undeclared (first use this function) - c++

The question is the same as this one; however, the solution doesn't work for me.
According to the DebugActiveProcessStop function documentation, the minimum supported client is Windows XP. I am using Windows 7.
// #ifdef _WIN32_WINNT
// #undef _WIN32_WINNT
// #endif
#define NTDDI_VERSION 0x05010000
// #define _WIN32_WINNT 0x0502
#include <iostream>
#include <windows.h>
using namespace std;
class CppDBG
{
    ...
public:
    BOOL detach(void);
    ...
};
...
BOOL CppDBG::detach(void)
{
    if (DebugActiveProcessStop(pid)) {
        cout << "[+] Finished debugging. Exiting...";
        return true;
    }
    else {
        cout << "[-] Error" << endl;
        return false;
    }
}
int main()
{
    CppDBG dbg;
    ...
    dbg.detach();
    return 0;
}

I suspect that your IDE has an old version of the Windows API headers installed.
The error you get is a compiler error telling you that the compiler does not know what DebugActiveProcessStop is. As far as the compiler is concerned, it could be anything (a variable, a constant, ...). This error has nothing to do with the version of Windows your system is running.
To fix this, try downloading the Windows SDK from Microsoft and telling your compiler to use it (change the include directories, library paths, and so on). How to do that depends heavily on the IDE you are using, but the internet should provide enough help.
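For what it's worth, the declarations of the debugging functions in the Windows headers are guarded by target-version checks, so once you have headers that actually contain the function you may also need to set the target version before including windows.h - which is what the commented-out defines in the question are circling around. A minimal sketch (the pid value is illustrative):

// Target Windows XP or later so the guarded declaration of
// DebugActiveProcessStop becomes visible.
#define _WIN32_WINNT 0x0501
#include <windows.h>

int main()
{
    DWORD pid = 1234;            // illustrative; use a real process id
    DebugActiveProcess(pid);     // attach to the target process
    DebugActiveProcessStop(pid); // detach again
    return 0;
}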

Related

C++ Unix and Windows support

I want to make my project available on Linux.
Therefore, I need to substitute the functions from the windows.h header.
In my terminal.cpp I highlight error messages in red. I only want to do this on Windows (ANSI escape codes don't work in my console, so I don't have a cross-platform solution for this).
On Windows it works, but on Linux I get the following error:
/usr/bin/ld: /tmp/ccvTgiE8.o: in function `SetConsoleTextAttribute(int, int)':
Terminal.cpp:(.text+0x0): multiple definition of `SetConsoleTextAttribute(int, int)'; /tmp/cclUawx7.o:main.cpp:(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
In my main.cpp file I do nothing but include terminal.h and run it.
terminal.cpp
if (OS_Windows)
{
    SetConsoleTextAttribute(dependency.hConsole, 4);
    cout << "Error: " << e.getMessage() << endl;
    SetConsoleTextAttribute(dependency.hConsole, 7);
}
else
{
    cout << "Error: " << e.getMessage() << endl;
}
terminal.h
#ifdef _WIN32
#define OS_Windows 1
#include "WindowsDependency.h"
#else
#define OS_Windows 0
#include "UnixDependency.h"
#endif
WindowsDependency.h
#pragma once
#include <Windows.h>
class Dependency
{
public:
    HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
};
UnixDependency.h
#pragma once
class Dependency
{
public:
    int hConsole = 0;
};
void SetConsoleTextAttribute(int hConsole, int second) {};
Header files are supposed to contain declarations. By adding the {} you turned the declaration into a definition, and the one-definition rule does not allow multiple definitions of the same function with identical signatures across translation units.
Either remove the {} and provide the definition in a separately compiled .cpp file, or mark the function as inline.
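A minimal sketch of the inline variant (same header as above; only the last line changes):

// UnixDependency.h
#pragma once
class Dependency
{
public:
    int hConsole = 0;
};

// 'inline' allows this definition to appear in every translation unit
// that includes the header without violating the one-definition rule.
inline void SetConsoleTextAttribute(int /*hConsole*/, int /*attribute*/) {}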

boost, coroutine2 (1.63.0): throwing exception crashes Visual Studio on 32-bit Windows

In my application I'm using coroutine2 to generate some objects which I have to decode from a stream. These objects are generated using coroutines. My problem is that as soon as I reach the end of the stream and would theoretically throw std::ios_base::failure, my application crashes under certain conditions.
The function providing this feature is implemented in C++, exported as a C function, and called from C#. This all happens in a 32-bit process on Windows 10 x64. Unfortunately, it only reliably crashes when I start my test from C# in debugging mode WITHOUT the native debugger attached. As soon as I attach the native debugger, everything works as expected.
Here is a small test application to reproduce this issue:
Api.h
#pragma once
extern "C" __declspec(dllexport) int __cdecl test();
Api.cpp
#include <iostream>
#include <vector>
#include <sstream>
#include "Api.h"
#define BOOST_COROUTINES2_SOURCE
#include <boost/coroutine2/coroutine.hpp>
int test()
{
    using coro_t = boost::coroutines2::coroutine<bool>;
    coro_t::pull_type source([](coro_t::push_type& yield) {
        std::vector<char> buffer(200300, 0);
        std::stringstream stream;
        stream.write(buffer.data(), buffer.size());
        stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
        try {
            std::vector<char> dest(100100, 0);
            while (stream.good() && !stream.eof()) {
                stream.read(&dest[0], dest.size());
                std::cerr << "CORO: read: " << stream.gcount() << std::endl;
            }
        }
        catch (const std::exception& ex) {
            std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
        }
        catch (...) {
            std::cerr << "CORO: caught unknown exception." << std::endl;
        }
    });
    std::cout << "SUCCESS" << std::endl;
    return 0;
}
C#:
using System;
using System.Runtime.InteropServices;
namespace CoroutinesTest
{
    class Program
    {
        [DllImport("Api.dll", EntryPoint = "test", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        internal static extern Int32 test();

        static void Main(string[] args)
        {
            test();
            Console.WriteLine("SUCCESS");
        }
    }
}
Some details:
We are using Visual Studio 2015 (VC14) and dynamically link the C++ runtime.
The test library statically links Boost 1.63.0.
We also tried to reproduce this behaviour by calling the functionality directly from C++ and from Python. Neither test has reproduced the crash so far.
If you start the C# code with Ctrl+F5 (i.e. without the .NET debugger), everything will also be fine. Only if you start it with F5 (i.e. with the .NET debugger attached) will the Visual Studio instance crash. Also be sure not to enable the native debugger!
Note: if we don't use the exceptions in the stream, everything seems to be fine as well. Unfortunately, the code decoding my objects makes use of them, so I cannot avoid this.
It would be amazing if you had some additional hints on what might go wrong here, or a solution. I'm not entirely sure whether this is a Boost bug; it could also be the C# debugger interfering with Boost.Context.
Thanks in advance! Best regards, Michael
I realize this question is old, but I just finished reading a line in the docs that seems pertinent:
Windows using fcontext_t: turn off global program optimization (/GL) and change /EHsc (the compiler assumes that functions declared as extern "C" never throw a C++ exception) to /EHs (tells the compiler that functions declared as extern "C" may throw an exception).
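For reference, a sketch of what that change might look like on a bare compiler command line (in a Visual Studio project you would change it under C/C++ > Code Generation instead; the file name is taken from the question and the Boost include/library paths are omitted for brevity):

cl /EHs /MD /LD Api.cpp

Here /LD builds a DLL, /MD selects the dynamically linked runtime mentioned above, and /EHs replaces the usual /EHsc.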
This is just a guess, but in your coroutine I think you are supposed to push a boolean to your sink (named yield in your code), and the code is not doing it.
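For illustration, a minimal sketch of a coroutine that actually pushes values through the sink (standalone toy code, not the asker's decoding logic):

#include <boost/coroutine2/coroutine.hpp>
#include <iostream>

int main()
{
    using coro_t = boost::coroutines2::coroutine<bool>;
    coro_t::pull_type source([](coro_t::push_type& yield) {
        yield(true);   // suspends the coroutine and hands a value to the caller
        yield(false);
    });
    for (bool b : source)        // resumes the coroutine once per iteration
        std::cout << b << '\n';  // prints 1, then 0
    return 0;
}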

Switch off Memory Leak Detection in Boost.Test

I'm currently using Boost.Test and I'm wondering whether it is possible to switch off memory leak detection when compiling in DEBUG mode.
I don't want to use the command-line parameter --detect_memory_leak=0. I'm looking for a kind of #define parameter that switches off the memory leak detection feature in DEBUG mode.
It would also be suitable for me to switch off the memory leak detection feature by defining a certain compiler switch. I'm currently using Microsoft Visual Studio 2010.
#define BOOST_TEST_DETECT_MEMORY_LEAK 0 // Preprocessor switch I'm looking for!
#define BOOST_TEST_MODULE MyUnitTest
#include <boost/test/included/unit_test.hpp>
BOOST_AUTO_TEST_SUITE(MySuite);
BOOST_AUTO_TEST_CASE(MyUnitTest) {
    /// Following code has a memory leak
    /// ....
}
BOOST_AUTO_TEST_SUITE_END()
Just found out that possibly the best way to turn off the detection of memory leaks is to include the following code snippet in one's tests.
#include <boost/test/debug.hpp>
struct GlobalFixture {
    GlobalFixture() {
        boost::debug::detect_memory_leaks(false);
    }
    ~GlobalFixture() { }
};
BOOST_GLOBAL_FIXTURE(GlobalFixture);
Still, I was not able to switch the detection of memory leaks off and on for individual tests.
You can directly set the environment variable BOOST_TEST_DETECT_MEMORY_LEAK to 0, or use putenv:
#include <cstdlib>
//...
BOOST_AUTO_TEST_CASE(MyUnitTest) {
    putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
    //...
}
Edit: as you're using Visual Studio 2010, you can try _putenv or _wputenv:
#include <stdlib.h>
//...
BOOST_AUTO_TEST_CASE(MyUnitTest) {
    _putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
    //...
}
Otherwise, I found a function detect_memory_leaks in the Boost documentation, but it seems to be available only in recent Boost versions.
_CrtSetDbgFlag(0);
is the only thing that mostly worked for me. Leak messages still came out, but not enough to make me wait.
Here's some code detailing everything I tried:
struct GlobalFixture
{
    GlobalFixture()
    {
        // This doesn't seem to do anything
        // boost::debug::detect_memory_leaks(false);

        // This either
        //_putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");

        // This total hack also does nothing
        // using namespace boost::unit_test::runtime_config;
        // const_cast<boost::runtime::arguments_store&>(argument_store()).set(btrt_detect_mem_leaks, 0);

        // This gets rid of most of the messages
        _CrtSetDbgFlag(0);
    }
};
Why not use the _DEBUG macro?
#ifdef _DEBUG
#define BOOST_TEST_DETECT_MEMORY_LEAK 0
#endif
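Putting the two pieces that reportedly work together, here is a minimal sketch that disables the CRT debug-heap reporting only in DEBUG builds. It combines the global fixture pattern and the _CrtSetDbgFlag call shown above; the fixture name is illustrative:

#ifdef _DEBUG
#include <crtdbg.h>

struct DisableLeakCheckFixture {
    DisableLeakCheckFixture() {
        _CrtSetDbgFlag(0); // turn off CRT debug-heap reporting
    }
};
BOOST_GLOBAL_FIXTURE(DisableLeakCheckFixture);
#endif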

What is the correct way to include GLEW in a Mac OS X .framework?

Probably a very simple problem, but it has me stumped.
I've got an SDL/OpenGL/GLEW-based static library that compiles (gcc/g++) and links fine on Windows. On OS X, the same codebase fails to compile, claiming that it can't find the declarations of GL_NUM_EXTENSIONS and ::glGetStringi() - which has been the case since I threw GLEW into the mix (with only SDL and OpenGL, it builds fine on OS X too).
// globals.h
#include <glew.h>
#include <SDL/SDL.h>
// graphics.h
#include "globals.h"
bool HasGLExtension(const char* pName);
// graphics.cpp
#include <string>
#include <cstring>
#include "graphics.h"

bool HasGLExtension(const char* pName)
{
    GLint numExtensions;
    ::glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions); // error
    for (int i = 0; i < numExtensions; ++i)
    {
        if (strcmp(pName, (char*)::glGetStringi(GL_EXTENSIONS, i)) == 0) // error
        {
            return true;
        }
    }
    return false;
}
- The dependent libraries, built as frameworks, are situated at /Library/Frameworks.
- The -DGLEW_STATIC -DSDL_NO_GLEXT compile flags are used (as needed on Windows - the problem persists even if I remove them).
- Even auto-completion confirms that the location of glew.h exists (then again, of course, that's not the error I'm getting -- it's the symbols).
- Including SDL/SDL_opengl.h just results in conflicting declarations.
- The problematic definitions are present in glew.h.
What's the obvious thing that I'm missing?
Don't do it. For something that (1) is easy to set up, (2) works out of the box, and (3) has little to no licensing encumbrance, use glLoadGen.
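For illustration, here is a sketch of what the extension check above might look like with a glLoadGen-generated loader. The header name and the generated identifiers depend on the options you generate with, so treat them as placeholders rather than fixed API:

#include "gl_core_3_3.h" // header generated by glLoadGen (name varies)
#include <cstring>

bool HasGLExtension(const char* pName)
{
    // Assumes the glLoadGen loader function was called once after
    // the OpenGL context was created.
    GLint numExtensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
    for (GLint i = 0; i < numExtensions; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (std::strcmp(pName, ext) == 0)
            return true;
    }
    return false;
}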

GetLongPathName Undeclared

When I try to compile my code that calls the function GetLongPathName(), the compiler tells me that the function is undeclared.
I have already read the MSDN documentation located at http://msdn.microsoft.com/en-us/library/aa364980%28VS.85%29.aspx. But even though I included the header files it lists, I am still getting the undeclared-function error. Which header file(s) am I supposed to include when using this function?
#include <Windows.h>
#include <WinBase.h>
#define DLLEXPORT extern "C" __declspec(dllexport)
DLLEXPORT char* file_get_long(char* path_original)
{
    long length = 0;
    TCHAR* buffer = NULL;
    if (!path_original)
    {
        return "-10";
    }
    length = GetLongPathName(path_original, NULL, 0);
    if (length == 0)
    {
        return "-10";
    }
    buffer = new TCHAR[length];
    length = GetLongPathName(path_original, buffer, length);
    if (length == 0)
    {
        return "-10";
    }
    return buffer;
}
And, if it makes a difference, I am currently compiling with Dev-C++ on 64-bit Windows Vista.
Dev-C++'s support of the Windows API is not complete. Actually, it's not even close. It is entirely likely that the GetLongPathName function is not declared in the winbase.h that ships with that compiler (actually an old version of MinGW).
You can use the free compiler that ships with the Windows SDK to work around the problem. It is the same compiler that ships with Visual Studio, though it is command-line only.
You can also use Visual C++ Express Edition, which is free and provides features similar to Dev-C++.
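As with the DebugActiveProcessStop question above, it is also worth checking the target-version macro: in MinGW-era headers the GetLongPathName declaration is guarded by a version check, so defining the target version before including the Windows headers may be enough. A sketch, assuming your winbase.h actually contains the guarded declaration:

// Target Windows 2000 or later before pulling in the Windows headers.
#define _WIN32_WINNT 0x0500
#include <windows.h> // windows.h already includes winbase.h

// GetLongPathName should now be declared (as GetLongPathNameA/W).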