How can I use more memory? - c++

I got the following error:
std::bad_array_new_length at memory location 0x00B3F9D8
It is because of this part:
U32 rayCountSqrt = (U32)(std::ceil(objWidthLambda * (T)(observation.rayPerLam_) + 1.0)) + 1;
rayPool.Initialize( rayCountSqrt);
And if we look at the Initialize method:
void Initialize( const U32& rayCountSqrt )
{
    Reset();
    rayCountSqrt_ = rayCountSqrt;
    rayCount_ = rayCountSqrt * rayCountSqrt;
    rayArea_ = 0;
    rayTubeArray_.reset( new RayTube< T >[ rayCount_ ], []( RayTube< T >* ptr ){ delete[] ptr; } );
    init_ = true;
}
The problem is probably located here, because when I pass, for example, 10 to Initialize I get no errors. My rayCountSqrt is 86354, and I don't know if that is too big to handle.
How can I fix this? I really need to pass large numbers to this function in order to get good approximations.
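Just to put the numbers in perspective, here is a standalone sketch of the arithmetic involved (U32 is assumed to be a 32-bit unsigned integer, and the 64-byte element size is only a placeholder for sizeof(RayTube<T>)):

#include <cstdint>
#include <iostream>

int main()
{
    using U32 = std::uint32_t;                 // assumed 32-bit, as the name suggests
    const U32 rayCountSqrt = 86354;

    // 86354 * 86354 = 7,457,013,316, which does not fit in 32 bits,
    // so the U32 multiplication wraps around silently.
    const U32 wrapped = rayCountSqrt * rayCountSqrt;
    const std::uint64_t exact = std::uint64_t(rayCountSqrt) * rayCountSqrt;

    const std::uint64_t assumedElementSize = 64; // placeholder for sizeof(RayTube<T>)
    std::cout << "wrapped 32-bit count: " << wrapped << '\n'
              << "exact 64-bit count:   " << exact << '\n'
              << "bytes at 64 B each:   " << exact * assumedElementSize << '\n'; // roughly 477 GB
}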

Related

Memory Pools and <Unable to read memory>

I recently switched my project to using a linear memory allocator that I wrote myself (for learning). When I initialize the allocator, I pass it a pointer to a block of memory that was VirtualAlloc-ed beforehand. Before writing the allocator, I was using this block directly just fine.
In my test case, I am using the allocator to allocate memory for a Player* in that initial big block of memory. To make sure everything was working, I tried accessing the block of memory directly as I had before, to make sure the values were changing according to my expectations. That's when I hit a memory access error. Using the VS debugger/watch window, I have a reasonable idea of what is happening and when, but I am hoping to get some help with the question of why. I'll lay out the relevant pieces of code below.
VirtualAlloc call, later referred to by memory->transientStorage
win32_State.gameMemoryBlock = VirtualAlloc(baseAddress, (size_t)win32_State.totalSize,
MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
Allocator definition
struct LinearAllocator {
    void* currentPos;
    size_t totalSize;
    void* startPos;
    size_t usedMemory;
    size_t numAllocations;

    LinearAllocator();
    LinearAllocator(size_t size, void* start);
    LinearAllocator(LinearAllocator&) = delete;
    ~LinearAllocator();

    void* allocate(size_t size, uint8 alignment);
    void clear();
};
Player and Vec2f definitions
struct Player {
Vec2f pos;
bool32 isFiring;
real32 timeLastFiredMS;
};
union Vec2f {
struct {
real32 x, y;
};
real32 v[2];
};
Relevant Allocator Implementation Details
void* LinearAllocator::allocate(size_t size, uint8_t alignment) {
    if (size == 0 || !isPowerOfTwo(alignment)) {
        return nullptr;
    }
    uint8_t adjustment = alignForwardAdjustment(currentPos, alignment);
    if (usedMemory + adjustment + size > totalSize) {
        return nullptr;
    }
    uint8_t* alignedAddress = (uint8*)currentPos + adjustment;
    currentPos = (void*)(alignedAddress + size);
    usedMemory += size + adjustment;
    numAllocations++;
    return (void*)alignedAddress;
}
inline uint8_t alignForwardAdjustment(void* address, uint8_t alignment) {
    uint8_t adjustment = alignment - ((size_t)address & (size_t)(alignment - 1));
    if (adjustment == alignment) {
        return 0; // already aligned
    }
    return adjustment;
}
inline int32_t isPowerOfTwo(size_t value) {
    return value != 0 && (value & (value - 1)) == 0;
}
Initialization code where I attempt to use the allocator
// **Can write to memory fine here**
((float*)memory->transientStorage)[0] = 4.f;
size_t simulationAllocationSize = memory->transientStorageSize / 2 / sizeof(real32);
simulationMemory = LinearAllocator(simulationAllocationSize, &memory->transientStorage + (uint8_t)0);
for (int i = 0; i < MAX_PLAYERS; i++) {
    Player* p = (Player*)simulationMemory.allocate(sizeof(Player), 4);
    // **also works here**
    ((real32*)memory->transientStorage)[0] = 3.f;
    p->pos.x = 0.f; // **after this line, I got the unable to read memory error**
    p->pos.y = 0.f;
    p->isFiring = false;
    p->timeLastFiredMS = 0.f;
    // **can't write**
    ((real32*)memory->transientStorage)[0] = 1.f;
}
// **also can't write**
((real32*)memory->transientStorage)[0] = 2.f;
real32 test = ((real32*)memory->transientStorage)[0];
My running assumption is that I'm missing something obvious. But the only clue I have to go off of is that it changed after setting a value in the Player struct. Any help here would be greatly appreciated!
Looks like this is your problem:
simulationMemory = LinearAllocator(simulationAllocationSize,
&memory->transientStorage + (uint8_t)0);
There's a stray & operator, causing you to allocate memory not from the allocated memory block that memory->transientStorage points to but from wherever memory itself lives.
This in turn causes the write to p->pos.x to overwrite the value of transientStorage.
The call to LinearAllocator should be just
simulationMemory = LinearAllocator(simulationAllocationSize,
memory->transientStorage + (uint8_t)0);
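The type difference behind the fix is easy to see in isolation; here is a minimal sketch (the GameMemory struct and its fields are assumed stand-ins for the original types):

#include <cstddef>
#include <cstdio>

struct GameMemory {                  // assumed stand-in for the original struct
    void* transientStorage;
    std::size_t transientStorageSize;
};

int main()
{
    static char block[1024];         // stand-in for the VirtualAlloc-ed region
    GameMemory gm{ block, sizeof(block) };
    GameMemory* memory = &gm;

    // memory->transientStorage is the block itself;
    // &memory->transientStorage is the address of the pointer member inside GameMemory.
    std::printf("block:                     %p\n", memory->transientStorage);
    std::printf("&memory->transientStorage: %p\n", (void*)&memory->transientStorage);

    // Handing the second address to the allocator means allocations land on top of
    // the GameMemory struct itself, so writing through the returned Player* clobbers
    // the transientStorage pointer, and later reads through it fault.
}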

A function to display contents of 1 or 2 dimensional array of any type

I needed to be able to display the contents of my various arrays (for debugging purposes at this point), and decided to write a function to help me with that. This is what I came up with.
The goal is to be able to display any type of incoming array (int, double, etc).
Because I never had any official programming training, I am wondering if what I have is too "inelegant" and could be improved by doing something obvious to a good computer science person, but not so to a layperson.
int
DisplayArrayInDebugWindow(
    void** incoming_array,
    char* array_type_str,
    int array_last_index_dim_size,
    int array_terminator,
    HWND handle_to_display_window,
    wchar_t* optional_array_name )
{
    wchar_t message_bufferw[1000];
    message_bufferw[0] = L'\0';
    wchar_t temp_buffer[400];
    if ( array_last_index_dim_size == 0 ) { array_last_index_dim_size = 1; }
    // ----------------------------------------------------------------------------
    // Processing for "int" type array
    // ----------------------------------------------------------------------------
    if ( 0 == (strcmp( array_type_str, "int" )) )
    {
        int j = 0;
        swprintf( temp_buffer, L"%s\r\n", optional_array_name );
        wcscat( message_bufferw, temp_buffer );
        for ( int i = 0; ((int)(*((int*)( (int)incoming_array + i * (int)sizeof(int) * array_last_index_dim_size + j * (int)sizeof(int))))) != array_terminator; i++ )
        {
            swprintf( temp_buffer, L"%02i:\t", i );
            wcscat( message_bufferw, temp_buffer );
            for ( j; j < array_last_index_dim_size; j++ )
            {
                swprintf( temp_buffer, L"%i\t", ((int)(*((int*)( (int)incoming_array + i * (int)sizeof(int) * array_last_index_dim_size + j * (int)sizeof(int) )))) );
                wcscat( message_bufferw, temp_buffer );
            }
            wcscat( message_bufferw, L"\r\n" );
            // --------------------------------------------------------------------
            // reset j to 0 each time
            // --------------------------------------------------------------------
            j = 0;
        }
        swprintf( temp_buffer, L"\nEnd of Array\n" );
        wcscat( message_bufferw, temp_buffer );
        SetWindowText( handle_to_display_window, message_bufferw );
    }
    return 0;
}
NB: When I pass in "incoming array", I type cast it as (void**) obviously.
When the data type changes but the algorithm doesn't, it's time to consider using templates.
#include <iostream>

template<class Element_Type>
void print_array(Element_Type const * p_begin,
                 Element_Type const * p_end)
{
    while (p_begin != p_end)
    {
        std::cout << *p_begin;
        ++p_begin;
    }
}
The conversion from single dimension to multiple dimension is left as an exercise to the OP and readers.
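In the single-dimension form, the same template already handles any streamable element type; a small usage sketch (not part of the original answer):

int values[] = { 1, 2, 3, 4, 5 };
print_array(values, values + 5);      // prints the ints back to back

double reals[] = { 0.5, 1.5, 2.5 };
print_array(reals, reals + 3);        // same template, different element type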
Edit 1: Another alternative
At some point, the output function will need to know how to print the data you give it.
One option is to write your own printf-style function that has format specifiers for the data you send it.
Another option is to pass a pointer to a function that prints the data.
Either way, the fundamental issue is that the output function needs to know how to print the data.
For C++, I suggest overloading operator<< for the class / structure. Since the class/structure knows the data, it can easily know how to print it.
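For instance, a minimal sketch of that idea (the Point struct is an invented example, not from the question):

#include <iostream>

struct Point {
    int x;
    int y;
};

// The type itself describes how its data should be printed.
std::ostream& operator<<(std::ostream& os, const Point& p)
{
    return os << '(' << p.x << ", " << p.y << ')';
}

int main()
{
    Point points[] = { {1, 2}, {3, 4} };
    for (const Point& p : points)
        std::cout << p << '\n';   // prints (1, 2) then (3, 4)
}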

How do I cast a void pointer to a int[3]?

I need to call a 3rd party library and pass in an int[3] as a void * like this [works]:
int pattern[3] = {2,4,10};
if ( OSTaskCreate( BlinkLED,
( void * ) pattern,
( void * ) &BlinkTaskStack[USER_TASK_STK_SIZE],
( void * ) BlinkTaskStack,
MAIN_PRIO - 1 ) != OS_NO_ERR )
{
iprintf( "*** Error creating blink task\r\n" );
}
But now I need to parse a string to get the pattern array and I can't seem to get it right.
First I pass the string into the parser and get back the array:
int (&ParseBlinkOnCommand(char rxbuffer[3]))[3]
{
// Code parses rxbuffer and creates the 3 ints needed
int pattern[3] = {repeats, onTicks, offTicks};
return pattern;
}
Then I try to pass it to the OSTaskCreate just like I did before:
int pattern2[3] = ParseBlinkOnCommand(rxbuffer);
if ( OSTaskCreate( BlinkLED,
( void * ) pattern2,
( void * ) &BlinkTaskStack[USER_TASK_STK_SIZE],
( void * ) BlinkTaskStack,
MAIN_PRIO - 1 ) != OS_NO_ERR )
{
iprintf( "*** Error creating remote blink task\r\n" );
}
but I get the error 'array must be initialized with a brace-enclosed initializer'.
What is the right way to do this?
First, ParseBlinkOnCommand returns a reference to a local object, and so returns a dangling reference.
Second, C arrays are not copyable, so int pattern2[3] = ParseBlinkOnCommand(rxbuffer); would have to be int (&pattern2)[3] = ParseBlinkOnCommand(rxbuffer);.
But why not use std::vector or std::array (or a custom structure)?
std::vector<int> ParseBlinkOnCommand(const char (&rxbuffer)[3])
{
// Code parses rxbuffer and creates the 3 ints needed
return {repeats, onTicks, offTicks};
}
And then
auto pattern2 = ParseBlinkOnCommand(rxbuffer);
if ( OSTaskCreate( BlinkLED,
pattern2.data(),
&BlinkTaskStack[USER_TASK_STK_SIZE],
BlinkTaskStack,
MAIN_PRIO - 1 ) != OS_NO_ERR )
{
iprintf( "*** Error creating remote blink task\r\n" );
}
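Since the pattern length is fixed at three, std::array (the other option mentioned above) works just as well; a sketch along the same lines, where repeats, onTicks, and offTicks again stand for whatever the parsing produces:

#include <array>

std::array<int, 3> ParseBlinkOnCommand(const char (&rxbuffer)[3])
{
    // Code parses rxbuffer and creates the 3 ints needed
    return { repeats, onTicks, offTicks };
}

The call site stays the same: auto pattern2 = ParseBlinkOnCommand(rxbuffer); and pattern2.data() is what gets passed to OSTaskCreate.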

NSInvocation not passing pointer to c++ array

I think I'm making just a fundamental mistake, but I cannot for the life of me see it.
I'm calling a method on an Objective-C object from within a C++ class (which is locked). I'm using NSInvocation to prevent me from having to write hundreds of methods just to access the data in this other object.
These are the steps I'm going through. This is my first call, and I want to pass s2. I can't really provide a compilable example, but hopefully it's just a DUHRRRRR problem on my part.
float s2[3];
id args2s[] = {(id)&_start.x(),(id)&_start.y(),(id)&s2};
_view->_callPixMethod(@selector(convertPixX:pixY:toDICOMCoords:),3,args2s);
This is the View method being called
invokeUnion View::_callPixMethod(SEL method, int nArgs, id args[])
{
    DataModule* data;
    DataVisitor getdata(&data);
    getConfig()->accept(getdata);
    invokeUnion retVal;
    retVal.OBJC_ID = data->callPixMethod(_index, _datasetKey, method, nArgs, args);
    return retVal;
}
invokeUnion is a union so I can get the float value returned by NSInvocation.
union invokeUnion {
    id OBJC_ID;
    int intValue;
    float floatValue;
    bool boolValue;
};
This is the method in the data object (pthread-locked with lock() and unlock()):
id DataModule::callPixMethod(int index, std::string predicate, SEL method, int nArgs, id args[] )
{
    // May Block
    DCMPix *pix = [[getSeriesData(predicate) pix] objectAtIndex:index];
    lock();
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSMethodSignature *signature;
    NSInvocation *invocation;
    signature = [DCMPix instanceMethodSignatureForSelector:method];
    invocation = [NSInvocation invocationWithMethodSignature:signature];
    [invocation setSelector:method];
    [invocation setTarget:pix];
    if (nArgs > 0) for (int n = 0; n < nArgs; n++) {
        SFLog(@"invocation: i=%d, *ptr=0x%x, valf=%f, vald=%d",n,args[n],*args[n],*args[n]);
        [invocation setArgument:args[n] atIndex:2+n];
    }
    id retVal;
    [invocation invoke];
    [invocation getReturnValue:&retVal];
    [pool release];
    unlock();
    return retVal;
}
The method in the DCMPix object (which I can't modify, it's part of a library) is the following:
-(void) convertPixX: (float) x pixY: (float) y toDICOMCoords: (float*) d pixelCenter: (BOOL) pixelCenter
{
    if( pixelCenter)
    {
        x -= 0.5;
        y -= 0.5;
    }
    d[0] = originX + y*orientation[3]*pixelSpacingY + x*orientation[0]*pixelSpacingX;
    d[1] = originY + y*orientation[4]*pixelSpacingY + x*orientation[1]*pixelSpacingX;
    d[2] = originZ + y*orientation[5]*pixelSpacingY + x*orientation[2]*pixelSpacingX;
}
-(void) convertPixX: (float) x pixY: (float) y toDICOMCoords: (float*) d
{
    [self convertPixX: x pixY: y toDICOMCoords: d pixelCenter: YES];
}
It's crashing when it tries to access d[0] with EXC_BAD_ACCESS, which I know means it's accessing released memory, or memory outside of its scope.
I'm getting lost keeping track of my pointers to pointers. The two float values come across fine (as does other info in other methods), but this is the only method asking for a float* as a parameter. From what I understand, the convertPixX: method was converted over from a C program written for Mac OS 9... which is why it asks for the C array as an out value... I think.
Anyway, any insight would be greatly appreciated.
I've tried sending the value like this:
float *s2 = new float[3];
void* ps2 = &s2;
id args2s[] = {(id)&_start.x(),(id)&_start.y(),(id)&ps2};
_view->_callPixMethod(@selector(convertPixX:pixY:toDICOMCoords:),3,args2s);
But that gives a SIGKILL - plus I'm sure it's bogus and wrong. ... but I tried.
anyway... pointers! cross-language! argh!
Thanks,
An array is not a pointer. Try adding the following line
NSLog(@"%p, %p", s2, &s2);
just above:
id args2s[] = {(id)&_start.x(),(id)&_start.y(),(id)&s2};
s2 and &s2 are both the address of the first float in your array, so when you do:
[invocation setArgument:args[n] atIndex:2+n];
for n = 2, you are not copying in a pointer to the first float, but the first float, possibly the first two floats if an id is 64 bits wide.
Edit:
To fix the issue, this might work (not tested).
float s2[3];
float* s2Pointer = s2;
id args2s[] = {(id)&_start.x(),(id)&_start.y(),(id)&s2Pointer};
_view->_callPixMethod(@selector(convertPixX:pixY:toDICOMCoords:),3,args2s);
s2Pointer is a real pointer that will give you the double indirection you need.
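The underlying array/pointer distinction can be shown in plain C++, independent of the NSInvocation machinery (a standalone sketch):

#include <cstdio>

int main()
{
    float s2[3] = { 0.f, 0.f, 0.f };
    float* s2Pointer = s2;             // the array decays to a pointer to its first element

    // s2 (decayed) and &s2 print the same address, but their types differ:
    // float* versus float (*)[3]. Copying bytes from &s2 copies the floats
    // themselves, not a pointer to them.
    std::printf("s2         -> %p\n", (void*)s2);
    std::printf("&s2        -> %p\n", (void*)&s2);

    // &s2Pointer is the address of a variable that actually holds a float*,
    // which is what an argument slot expecting a float* needs to be filled from.
    std::printf("&s2Pointer -> %p (holds %p)\n", (void*)&s2Pointer, (void*)s2Pointer);
}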

Function has corrupt return value

I have a situation in Visual C++ 2008 that I have not seen before. I have a class with 4 STL objects (list and vector to be precise) and integers.
It has a method:
inline int id() { return m_id; }
The return value from this method is corrupt, and I have no idea why.
debugger screenshot http://img687.imageshack.us/img687/6728/returnvalue.png
I'd like to believe it's a stack smash, but as far as I know, I have no buffer over-runs or allocation issues.
Some more observations
Here's something that puts me off: the debugger shows the right values at the place marked // wrong ID below.
m_header = new DnsHeader();
assert(_CrtCheckMemory());
if (m_header->init(bytes, size))
{
    eprintf("0The header ID is %d\n", m_header->id()); // wrong ID!!!
Inside m_header->init():
m_qdcount = ntohs(h->qdcount);
m_ancount = ntohs(h->ancount);
m_nscount = ntohs(h->nscount);
m_arcount = ntohs(h->arcount);
eprintf("The details are %d,%d,%d,%d\n", m_qdcount, m_ancount, m_nscount, m_arcount);
// copy the flags
// this doesn't work with a bitfield struct :(
// memcpy(&m_flags, bytes + 2, sizeof(m_flags));
//unpack_flags(bytes + 2); //TODO
m_init = true;
}
eprintf("Assigning an id of %d\n", m_id); // Correct ID.
return
m_header->id() is an inline function in the header file
inline int id() { return m_id; }
I don't really know how best to post the code snippets I have, but here's my best shot at it. Please do let me know if they are insufficient:
The DnsPacket class contains a DnsHeader object, m_header.
Main body:
DnsPacket *p;
p = new DnsPacket(r);
assert(_CrtCheckMemory());
p->add_bytes(buf, r); // add bytes to a vector m_bytes inside DnsPacket
if (p->parse())
{
    read_packet(sin, *p);
}
p->parse:
size_t size = m_bytes.size(); // m_bytes is a vector
unsigned char *bytes = new u_char[m_bytes.size()];
copy(m_bytes.begin(), m_bytes.end(), bytes);
m_header = new DnsHeader();
eprintf("m_header allocated at %x\n", m_header);
assert(_CrtCheckMemory());
if (m_header->init(bytes, size)) // just set the ID and a bunch of other ints here.
{
    size_t pos = DnsHeader::SIZE; // const int
    if (pos != size)
        ; // XXX perhaps generate a warning about extraneous data?
    if (ok)
        m_parsed = true;
}
else
{
    m_parsed = false;
}
if (!ok) {
    m_parsed = false;
}
return m_parsed;
}
read_packet:
DnsHeader& h = p.header();
eprintf("The header ID is %d\n", h.id()); // ID is wrong here
...
DnsHeader constructor:
m_id = -1;
m_qdcount = m_ancount = m_nscount = m_arcount = 0;
memset(&m_flags, 0, sizeof(m_flags)); // m_flags is a struct
m_flags.rd = 1;
p.header():
return *m_header;
m_header->init: (u_char* bytes, int size)
header_fmt *h = (header_fmt *)bytes;
m_id = ntohs(h->id);
eprintf("Assigning an id of %d/%d\n", ntohs(h->id), m_id); // ID is correct here
m_qdcount = ntohs(h->qdcount);
m_ancount = ntohs(h->ancount);
m_nscount = ntohs(h->nscount);
m_arcount = ntohs(h->arcount);
You seem to be using a pointer to an invalid class somehow. The return value shown is the value that VS usually uses to initialize memory with:
2^32 - 842150451 = 0xCDCDCDCD
You probably have not initialized the class that this function is a member of.
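As a quick check of that arithmetic (a standalone sketch; 0xCDCDCDCD is the pattern the MSVC debug CRT fills freshly allocated, uninitialized memory with):

#include <cstdint>
#include <cstdio>

int main()
{
    const std::uint32_t fill = 0xCDCDCDCDu;

    // Reinterpreted as a signed 32-bit int, the fill pattern is the
    // suspicious value that shows up for an uninitialized member like m_id.
    const std::int32_t asSigned = static_cast<std::int32_t>(fill);
    std::printf("0x%08X as signed int: %d\n", static_cast<unsigned>(fill), asSigned); // -842150451

    // And 2^32 - 842150451 gets back to the fill pattern.
    const unsigned long long back = (1ull << 32) - 842150451ull;
    std::printf("2^32 - 842150451 = 0x%08llX\n", back);                               // 0xCDCDCDCD
}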
Without seeing more of the code in context... it might be that m_id is out of the scope you expect it to be in.
Reinstalled VC++. That fixed everything.
Thank you for your time and support everybody! :) Appreciate it!