I am using VS2010 and Casablanca version 1.2 to integrate a REST interface into an existing C++ solution. If I create a new solution with only this block of code, it works flawlessly. When I drop this code into an existing .cpp file, it crashes on the creation of the client object with a memcpy exception. I have updated the properties file to reference the correct version of Casablanca (100) and added my external dependencies as well as the paths for the include and lib directories.
The block of code is:
try
{
    http_client_config cimconfig;
    cimconfig.set_validate_certificates(false);

    http_client cimclient(L"https://dmaid52.corp.global/workplace", cimconfig);
    cimclient.request(methods::GET).then([](http_response response) {
        string_t theResponse = response.to_string();
    }).wait();
}
catch( const http_exception &e )
{
    printf("Exception status code %u returned. %s\n", e.error_code(), e.what());
}
When the cimclient is created I get the exception. If I remove the references to the config and only call http_client cimclient(L"https://dmaid52.corp.global/workplace") it seems to work OK but then it will throw the exception on the .request.
The call stack for the exception is below.
msvcr100d.dll!_VEC_memcpy(void * dst, void * src, int len) + 0x46 bytes
cpprest100d_1_2.dll!_wmemcpy() + 0x31 bytes
cpprest100d_1_2.dll!std::char_traits<wchar_t>::copy() + 0x2f bytes
cpprest100d_1_2.dll!std::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t> >::assign() + 0xb7 bytes
cpprest100d_1_2.dll!std::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t> >::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t> >() + 0x86 bytes
cpprest100d_1_2.dll!web::http::client::credentials::credentials() + 0x67 bytes
cpprest100d_1_2.dll!web::http::client::http_client_config::http_client_config() + 0x6c bytes
cpprest100d_1_2.dll!web::http::client::details::_http_client_communicator::_http_client_communicator() + 0x73 bytes
cpprest100d_1_2.dll!web::http::client::details::winhttp_client::winhttp_client() + 0x52 bytes
cpprest100d_1_2.dll!std::tr1::_Ref_count_obj<web::http::client::details::winhttp_client>::_Ref_count_obj<web::http::client::details::winhttp_client><web::http::uri const &,web::http::client::http_client_config const &>() + 0xa6 bytes
cpprest100d_1_2.dll!std::tr1::make_shared<web::http::client::details::winhttp_client,web::http::uri const &,web::http::client::http_client_config const &>() + 0x8f bytes
cpprest100d_1_2.dll!web::http::client::http_network_handler::http_network_handler() + 0x70 bytes
cpprest100d_1_2.dll!std::tr1::_Ref_count_obj<web::http::client::http_network_handler>::_Ref_count_obj<web::http::client::http_network_handler><web::http::uri const &,web::http::client::http_client_config const &>() + 0xa3 bytes
cpprest100d_1_2.dll!std::tr1::make_shared<web::http::client::http_network_handler,web::http::uri const &,web::http::client::http_client_config const &>() + 0x8c bytes
cpprest100d_1_2.dll!web::http::client::http_client::build_pipeline() + 0x6f bytes
cpprest100d_1_2.dll!web::http::client::http_client::http_client() + 0x74 bytes
Hl7.exe!CChartSchedule::sendScheduleToCIM(QMsgSchedule * pMsg) Line 146 + 0x35 bytes C++
I have searched high and low to try to find a solution to this error, to no avail. I think it may be a setting somewhere in the project settings, but I have compared the standalone project to my integrated project and can't come up with anything.
I've got myself into a very similar situation.
In my case the problem was different build settings for the Casablanca library and the target executable. Casablanca was built as a static library with one set of defines, and the final project used another set of defines. This caused different sizes of internal structures in Casablanca, which led to very weird behavior and unexpected results.
Visual Studio gave me this error: Stack cookie instrumentation code detected a stack-based buffer overrun.
I found the source of the problem by checking sizeof(http_client_config) in my project and in the Casablanca project.
The resolution is simple - set the same defines in both projects.
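For example, a quick check along these lines (a minimal sketch; I'm assuming the usual cpprest/http_client.h include and that both projects can build a small console target) will reveal the mismatch - if the two builds print different sizes, they were compiled with different defines:
#include <cpprest/http_client.h>
#include <iostream>

int main()
{
    // Build and run this once in the Casablanca project and once in the
    // consuming project; differing values indicate an ABI mismatch.
    std::cout << "sizeof(http_client_config) = "
              << sizeof(web::http::client::http_client_config) << std::endl;
    return 0;
}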
Related
I'm implementing SHA-512 in OpenCL. I have a simple kernel function definition:
__kernel void _sha512(__global char *message, const uint length, __global char *hash);
On the host I have implemented and successfully tested the SHA-512 algorithm.
My problem is copying data from the message array to a temporary variable called character:
char character = message[i];
where i is a loop variable ranging from 0 to the message's size.
When I try to run my program I get these errors:
0x00007FFD9FA03D54 (0x0000000010CD0F88 0x0000000010CD0F88 0x0000000010BAEE88 0x000000001A2942A0), nvvmCompilerProperty() + 0x26174 bytes(s)
...
0x00007FFDDFA70D51 (0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000), RtlUserThreadStart() + 0x21 bytes(s)
0x00007FFDDFA70D51 (0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000), RtlUserThreadStart() + 0x21 bytes(s)
I read about async_work_group_copy() but I can't understand how to use it - I couldn't find any example code in the docs.
I have tried char character = (__private char) message[i]; but that doesn't work either.
I don't understand how to pass the last parameter into async_work_group_copy() or how to use it to copy data from __global memory into __private memory.
OpenCL by default does not allow single-byte access in kernels: memory access needs to be in multiples of 4 bytes, aligned to 4-byte boundaries. If your implementation supports it, you can enable byte-wise memory accesses. This involves the cl_khr_byte_addressable_store extension, which you need to check for and explicitly enable in your kernel source. Give that a try and see if it solves your problem.
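For reference, here is a rough host-side sketch in C++ (the clGetDeviceInfo calls are standard OpenCL; the helper function name is just illustrative) of checking for the extension; the pragma you then put at the top of the kernel source is shown as a comment:
#include <CL/cl.h>
#include <string>

// Returns true if the device advertises cl_khr_byte_addressable_store.
bool supports_byte_addressable_store(cl_device_id device)
{
    size_t size = 0;
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, 0, NULL, &size);
    std::string extensions(size, '\0');
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, size, &extensions[0], NULL);
    return extensions.find("cl_khr_byte_addressable_store") != std::string::npos;
}

// If supported, enable it in the kernel source:
//   #pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable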
To use async_work_group_copy, try something like this:
#define LOCAL_MESSAGE_SIZE 64 // or some other suitable size for your workgroup
__local char local_message[LOCAL_MESSAGE_SIZE];
event_t local_message_ready = async_work_group_copy(local_message, message, LOCAL_MESSAGE_SIZE, 0);
// ...
// Just before you need to use local_message's content:
wait_group_events(1, &local_message_ready);
// Use local_message from here onwards
Note that async_work_group_copy is not required; you can access global memory directly. Which will be faster depends on your kernel, OpenCL implementation, and hardware.
Another option (the only option if your implementation/hardware do not support cl_khr_byte_addressable_store) is to fetch your data in chunks of at least 4 bytes. Declare your message as a __global uint* and unpack the bytes by shifting and masking:
uint word = message[i];
char byte0 = (word & 0xff);
char byte1 = ((word >> 8) & 0xff);
char byte2 = ((word >> 16) & 0xff);
char byte3 = ((word >> 24) & 0xff);
// use byte0..byte3 in your algorithm
Depending on implementation, hardware, etc. you may find this to be faster than bytewise access. (You may want to check if you need to reverse the unpacking by reading the CL_DEVICE_ENDIAN_LITTLE property using clGetDeviceInfo if you're not sure if all your deployment platforms will be little-endian.)
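A rough host-side sketch of that check (it assumes you already have a valid cl_device_id; the helper name is illustrative):
#include <CL/cl.h>

// Returns true if the device stores data little-endian.
bool device_is_little_endian(cl_device_id device)
{
    cl_bool little = CL_FALSE;
    clGetDeviceInfo(device, CL_DEVICE_ENDIAN_LITTLE, sizeof(little), &little, NULL);
    return little == CL_TRUE;
}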
I have a C++ application that has a main thread and a Poco::Timer that triggers a callback which writes to a file using Poco::FileOutputStream:
FileOutputStream file("test.txt", ios::binary); <-- *Access violation reading location here*
file.write(reinterpret_cast<char *>(&data), sizeof(data));
file.close();
The code always failed at the first line, here is the call stack:
testProject.exe!std::ctype<char>::widen(char _Byte=' ') Line 1716 + 0xf bytes C++
testProject.exe!std::basic_ios<char,std::char_traits<char> >::widen(char _Byte=' ') Line 126 C++
testProject.exe!std::basic_ios<char,std::char_traits<char> >::init(std::basic_streambuf<char,std::char_traits<char> > * _Strbuf=0x038ef700, bool _Isstd=false) Line 135 + 0xa bytes C++
testProject.exe!std::basic_ostream<char,std::char_traits<char> >::basic_ostream<char,std::char_traits<char> >(std::basic_streambuf<char,std::char_traits<char> > * _Strbuf=0x038ef700, bool _Isstd=false) Line 54 C++
testProject.exe!Poco::FileOutputStream::FileOutputStream(const std::basic_string<char,std::char_traits<char>,std::allocator<char> > & path="c:\Projects\TestProject\test.txt", int mode=32) Line 93 + 0xa3 bytes C++
testProject.exe!OPC_Server::OnTimer(Poco::Timer & timer={...}) Line 3656 + 0x13 bytes C++
testProject.exe!Poco::TimerCallback::invoke(Poco::Timer & timer={...}) Line 212 + 0x14 bytes C++
testProject.exe!Poco::Timer::run() Line 197 + 0x19 bytes C++
testProject.exe!Poco::PooledThread::run() Line 200 + 0x15 bytes C++
testProject.exe!Poco::`anonymous namespace'::RunnableHolder::run() Line 57 + 0x17 bytes C++
testProject.exe!Poco::ThreadImpl::runnableEntry(void * pThread=0x00db6afc) Line 207 + 0x20 bytes C++
testProject.exe!_callthreadstartex() Line 348 + 0xf bytes C
testProject.exe!_threadstartex(void * ptd=0x00db6d00) Line 331 C
Tracing through the stack, the timer thread seemed to have a problem reading the initialization _Byte at the top of the call stack, in the xlocale internal header:
_Elem __CLR_OR_THIS_CALL widen(char _Byte) const
{ // widen char
return (do_widen(_Byte)); <-- failed: Access violation reading location
}
Second entry in the stack in ios standard header:
_Elem __CLR_OR_THIS_CALL widen(char _Byte) const
{ // convert _Byte to character using imbued locale
const _Ctype& _Ctype_fac = _USE(getloc(), _Ctype);
return (_Ctype_fac.widen(_Byte)); <-- call the top of the stack
}
Third entry in the stack in ios standard header:
protected:
void __CLR_OR_THIS_CALL init(_Mysb *_Strbuf = 0,
bool _Isstd = false)
{ // initialize with stream buffer pointer
_Init(); // initialize ios_base
_Mystrbuf = _Strbuf;
_Tiestr = 0;
_Fillch = widen(' '); <-- call the second entry
But very strangely, the same code runs fine without any error when it is run on the main thread.
Are there any permission settings that I need to set for the Poco::Timer to be able to function properly? Or am I missing something very obvious? Thanks for any help.
EDIT: ----------------------
Poco version: 1.7.3
Platform: windows
It turns out that the application exits immediately after the timer is created, but the exit is not done cleanly, so it appears that the app is still running and the timer is still ticking, when actually some of the resources have already been released, which causes the error.
MS's _tmain() apparently does something extra compared to main().
Sorry, it is not _tmain() but _tmainCRTStartup, which calls _tmain(). When _tmain() exits, other cleanup code is run; somehow my process isn't terminated and the application appears to still be "running".
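For reference, here is a minimal sketch of the pattern (assuming Poco 1.7.x and a plain console main(); the payload, file name and timings are placeholders) where the main thread is kept alive and the timer is stopped before main() returns, so the callback never runs during teardown:
#include <Poco/Timer.h>
#include <Poco/FileStream.h>
#include <Poco/Thread.h>
#include <ios>

class Writer
{
public:
    void onTimer(Poco::Timer& /*timer*/)
    {
        // Same write as in the question, with a placeholder payload.
        Poco::FileOutputStream file("test.txt", std::ios::binary);
        int data = 42;
        file.write(reinterpret_cast<char*>(&data), sizeof(data));
        file.close();
    }
};

int main()
{
    Writer writer;
    Poco::Timer timer(1000, 1000);   // start delay and interval in ms
    timer.start(Poco::TimerCallback<Writer>(writer, &Writer::onTimer));

    Poco::Thread::sleep(10000);      // keep the main thread alive while the timer runs
    timer.stop();                    // stop the timer before main() returns
    return 0;
}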
When I try to run this:
#include <tchar.h>
#include <fstream>
int _tmain(int argc, TCHAR *argv[])
{
std::basic_ifstream<TCHAR> file("TestInput.txt");
file.get();
}
I get an AccessViolationException with this stack trace:
ntdll.dll!_RtlpWaitOnCriticalSection#8() + 0xae bytes
ntdll.dll!_RtlpEnterCriticalSectionContended#4() + 0xa1 bytes
ntdll.dll!_RtlEnterCriticalSection#4() - 0x1f885 bytes
msvcr120.dll!__lock_file() + 0x2ce45 bytes
[Managed to Native Transition]
MyProject.exe!std::basic_filebuf<char,std::char_traits<char> >::_Lock() Line 355 C++
msvcp120d.dll!std::basic_istream<char,std::char_traits<char> >::_Sentry_base::_Sentry_base() + 0x55 bytes
msvcp120d.dll!std::basic_istream<char,std::char_traits<char> >::sentry::sentry() + 0x32 bytes
msvcp120d.dll!std::basic_istream<char,std::char_traits<char> >::get() + 0x5c bytes
[Managed to Native Transition]
MyProject.exe!wmain(int argc = 0x2, wchar_t** argv = 0x054AA3F8) [line # removed] C++
MyProject.exe!__tmainCRTStartup() [line # removed] C
[Managed to Native Transition]
mscoreei.dll!__CorExeMain#0() + 0x71 bytes
mscoree.dll!_ShellShim__CorExeMain#0() + 0x227 bytes
mscoree.dll!__CorExeMain_Exported#0() + 0x8 bytes
ntdll.dll!___RtlUserThreadStart#8() + 0x27 bytes
ntdll.dll!__RtlUserThreadStart#8() + 0x1b bytes
Why does this happen, and how can I avoid it when trying to read a file?
It was because I was compiling with the _DEBUG macro (left over from before converting from native).
Removing it fixed the problem.
It sounds like you have macros doing unexpected things. When you need to figure out what happened in Visual C++, have it dump the preprocessor output. Details on doing that are documented here.
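As one illustrative way to see which configuration a given translation unit actually picked up, you can have MSVC print the state of the macro at compile time:
// Emits a message during compilation showing whether _DEBUG is defined
// in this translation unit (MSVC-specific #pragma message).
#ifdef _DEBUG
#pragma message("_DEBUG is defined in this translation unit")
#else
#pragma message("_DEBUG is NOT defined in this translation unit")
#endif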
I have been using PDH (Performance Data Helper) to collect perfmon data about the machine my process is running on. I have had no trouble with the PDH objects Processor Information and Memory, but when I try to use PhysicalDisk my program hangs.
I reduced the code to this test case.
void
third()
{
    char path[ PDH_MAX_COUNTER_PATH ];
    DWORD size = 0;
    PDH_STATUS status;

    strcpy( path, "\\PhysicalDisk(1 E:)\\% Disk Time" );

    // hangs
    status = PdhExpandWildCardPath( NULL, path, 0, & size, PDH_NOEXPANDCOUNTERS );

    HANDLE query;
    PDH_HCOUNTER hCounter;
    status = PdhOpenQuery( NULL, 0, & query );

    // hangs
    status = PdhAddCounter( query, path, 0, & hCounter );
}
When I run this code by itself, it works fine. When I run it in my larger program it hangs but only for PhysicalDisk. My larger program monitors a dozen PDH counters with no problems but when I add any PhysicalDisk counter, then it hangs. The counters are defined in an array, so the code is the same for all counters.
When it hangs, this is the stack back trace
ntdll.dll!_ZwDelayExecution#8() + 0x15 bytes
ntdll.dll!_ZwDelayExecution#8() + 0x15 bytes
KernelBase.dll!_Sleep#4() + 0xf bytes
perfdisk.dll!_OpenDiskObject#4() + 0x105 bytes
advapi32.dll!_OpenExtObjectLibrary#4() + 0x1f9 bytes
advapi32.dll!_QueryExtensibleData#4() - 0x250f bytes
advapi32.dll!_PerfRegQueryValue#28() + 0x26d bytes
kernel32.dll!_LocalBaseRegQueryValue#24() + 0x2754e bytes
kernel32.dll!_RegQueryValueExW#24() + 0xae bytes
pdh.dll!_GetSystemPerfData#16() + 0x92 bytes
pdh.dll!_GetObjectId#12() + 0xdb bytes
pdh.dll!_PdhiExpandWildcardPath#24() + 0x38b bytes
pdh.dll!_PdhExpandWildCardPathHA#20() + 0xcb bytes
pdh.dll!_PdhExpandWildCardPathA#20() + 0x10a bytes
> dbb30.dll!third() Line 2472 + 0x16 bytes C++
dbb30.dll!dbbProfileReport::dbbProfileReport() Line 2490 C++
dbb30.dll!`dynamic initializer for 'dbbProfileReportObject''() Line 2367 + 0xd bytes C++
msvcr90d.dll!_initterm(void (void)* * pfbegin=0x0121d380, void (void)* * pfend=0x0121fddc) Line 903 C
dbb30.dll!_CRT_INIT(void * hDllHandle=0x00ae0000, unsigned long dwReason=1, void * lpreserved=0x0018fd24) Line 318 + 0xf bytes C
dbb30.dll!__DllMainCRTStartup(void * hDllHandle=0x00ae0000, unsigned long dwReason=1, void * lpreserved=0x0018fd24) Line 540 + 0x11 bytes C
dbb30.dll!_DllMainCRTStartup(void * hDllHandle=0x00ae0000, unsigned long dwReason=1, void * lpreserved=0x0018fd24) Line 510 + 0x11 bytes C
ntdll.dll!_LdrpCallInitRoutine#16() + 0x14 bytes
ntdll.dll!_LdrpRunInitializeRoutines#4() - 0x352 bytes
ntdll.dll!_LdrpInitializeProcess#8() - 0x765 bytes
ntdll.dll!__LdrpInitialize#8() + 0xb4f9 bytes
ntdll.dll!_LdrInitializeThunk#8() + 0x10 bytes
The PDH code starts six threads in my process; one of them is in a function _PerfdiskPnpNotification, which seems to be related. When I look at the assembly code, the main thread is sleeping, waiting for this second thread to set a flag.
I have tried running with Admin permissions, but no change. Googling for _PerfdiskPnpNotification suggests it sometimes tries to open a pop-up window. I tried running the larger code from a GUI app, but it hangs too. I tried the larger code on two machines (both Windows 7) with no luck. The larger program calls the PDH code from a constructor for a global object; I tried the small test case from a ctor on a global too, but it worked.
I tried lodctr /r to rebuild the registry, but no joy.
Compiling with MSVC 2005 and MSVC 2008 Express.
I've been using libxml2 push parsing (SAX) to parse an incoming XML stream. This works well the first time but crashes on the second attempt every time. My code looks like this:
xmlSAXHandler saxHandler;
memset(&saxHandler, 0, sizeof(saxHandler));
xmlSAXVersion(&saxHandler, 2);
saxHandler.initialized = XML_SAX2_MAGIC; // so we do this to force parsing as SAX2.
saxHandler.startElementNs = &startElementNs;
saxHandler.endElementNs = &endElementNs;
saxHandler.warning = &warning;
saxHandler.error = &error;
saxHandler.characters = &characters;
xmlParserCtxtPtr pSaxCtx = xmlCreatePushParserCtxt(&saxHandler, this, 0, 0, 0);
I then feed in the XML stream using xmlParseChunk() and use the callbacks to process the data; once parsing is complete, I call xmlFreeParserCtxt(pSaxCtx) to free the context. As I mentioned, this all works perfectly on the first set of data, but when the code is run again I get an access violation. The stack trace is:
ntdll.dll!_RtlpWaitOnCriticalSection#8() + 0x99 bytes
ntdll.dll!_RtlEnterCriticalSection#4() + 0x168d8 bytes
ntdll.dll!_RtlpWaitOnCriticalSection#8() + 0x99 bytes
ntdll.dll!_RtlEnterCriticalSection#4() + 0x168d8 bytes
libxml2.dll!xmlGetGlobalState() Line 716 C
libxml2.dll!__xmlDefaultBufferSize() Line 814 + 0x5 bytes C
libxml2.dll!xmlAllocParserInputBuffer(xmlCharEncoding enc) Line 2281 + 0x5 bytes C
libxml2.dll!xmlCreatePushParserCtxt(_xmlSAXHandler * sax, void * user_data, const char * chunk, int size, const char * filename) Line 11695 + 0x9 bytes C
TestApp.exe!XMLProcessor::XMLProcessor(const wchar_t * szHost=0x00d3d80c, const wchar_t * szUri=0x00d3db40, bool secure=false) Line 16 + 0x19 bytes C++
TestApp.exe!WorkerThread::ThreadProc(void * lpParameter=0x00a351c0) Line 34 + 0x15 bytes C++
kernel32.dll!#BaseThreadInitThunk#12() + 0x12 bytes
ntdll.dll!___RtlUserThreadStart#8() + 0x27 bytes
ntdll.dll!__RtlUserThreadStart#8() + 0x1b bytes
It seems to be trying to lock a critical section which is either non-existent or corrupted, but I cannot figure out how/why it works the first time and not the second.
Any ideas?
Thanks,
J
Are the two calls in different threads?
Have you called the xmlInitParser function to initialize the library? A missing call to xmlInitParser will produce a call stack like yours in multi-threaded applications.
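For reference, a minimal sketch of the suggested fix (everything except the libxml2 calls is placeholder): call xmlInitParser() once from the main thread before any worker thread touches libxml2, and pair it with xmlCleanupParser() at shutdown:
#include <libxml/parser.h>

int main()
{
    xmlInitParser();      // one-time library initialization, before any parsing threads start

    // ... start worker threads that call xmlCreatePushParserCtxt(),
    //     xmlParseChunk() and xmlFreeParserCtxt() ...

    xmlCleanupParser();   // once, after all parsing threads have finished
    return 0;
}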