Warning C6386 - Buffer overrun while writing to 'LINES_DATA.Lines' - c++

I know this question has been asked before, but I couldn't quite fix my code, even after reading other threads.
Does anyone know why it is throwing this warning?
Warning C6386: Buffer overrun while writing to 'LINES_DATA.Lines': the writable size is 'LINES_DATA.NumLines*4' bytes, but '8' bytes might be written.
LINES_DATA.NumLines = line_i; // line_i = 100
LINES_DATA.Lines = new int* [LINES_DATA.NumLines];
line_i = 0;
for (rapidxml::xml_node<>* pNode = pRoot->first_node(); pNode; pNode = pNode->next_sibling())
{
    LINES_DATA.Lines[line_i] = new int[COLUMNSIZE]; // COLUMNSIZE = 5
    for (int pos_i = 0; pos_i < COLUMNSIZE; pos_i++)
    {
        LINES_DATA.Lines[line_i][pos_i] = pNode->value()[pos_i] - '0';
    }
    line_i++;
}
I get the warning in this line:
LINES_DATA.Lines[line_i] = new int[COLUMNSIZE];
Thank you so much

If the array (LINES_DATA.Lines) has line_i elements, then LINES_DATA.Lines[line_i] is not valid.
Arrays are zero-based, so LINES_DATA.Lines has elements 0 to line_i - 1.

It's just a Code Analysis warning. The compiler isn't smart enough to work out your program's entire runtime behaviour.
Your code does have a real risk of buffer overruns, particularly if the XML contains more than 100 elements. You should be using smart pointers and/or STL containers here.
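For illustration, here is a minimal sketch of the same loop using std::vector, assuming the same pRoot and COLUMNSIZE as in the question; the container sizes itself, so there is no fixed 100-element assumption to overrun:
#include <vector>

std::vector<std::vector<int>> lines;
for (rapidxml::xml_node<>* pNode = pRoot->first_node(); pNode; pNode = pNode->next_sibling())
{
    std::vector<int> row(COLUMNSIZE);
    for (int pos_i = 0; pos_i < COLUMNSIZE; pos_i++)
    {
        row[pos_i] = pNode->value()[pos_i] - '0';
    }
    lines.push_back(std::move(row)); // grows as needed, so the bounds are always right
}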


How can I fix C6385?

// Take last element from deck and add to dealer's hand
// Update current elements after
// Ensure the deck still has cards
if (deck.currentElements == 0) {
    getNewDeck(deck);
    shuffleDeck(deck);
}
deck.currentElements -= 1;
dealerCards.currentElements += 1;
dealerCards.Cards[dealerCards.currentElements] = deck.Cards[deck.currentElements];
// Update the deck array by decreasing size
// hence used cards are removed
Card* temp = deck.Cards;
deck.Cards = new Card[deck.currentElements];
for (int i = 0; i < deck.currentElements; i++) {
    deck.Cards[i] = temp[i];
}
// Delete memory associated with temp
delete[] temp;
Hi, I am getting the following warning on deck.Cards[i] = temp[i];: C6385 Reading invalid data from 'deck.cards': the readable size is '(unsigned int)*64+4' bytes, but '128' bytes may be used.
What am I doing wrong, and how can I fix it? The problem came up when I added the if statement seen at the top. Is there a chance that this could simply be a false positive? I am using Visual Studio.
Because dealerCards.currentElements is incremented before the assignment, dealerCards.Cards[0] will never be written; there will be a hole. Do it in this order instead:
--deck.currentElements;
dealerCards.Cards[dealerCards.currentElements] = deck.Cards[deck.currentElements];
++dealerCards.currentElements;
This assumes that a valid index is in the range 0 .. (currentElements - 1).
The warning, however, is on the deck; it is probably triggered by very similar code elsewhere.
Since C-level code (raw arrays) is low-level and error-prone, it is better to switch to higher-level types like std::vector.
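As a rough illustration (the Card layout and a vector-based deck are assumptions for this sketch, not code from the question), std::vector removes the manual copy-and-reallocate dance entirely:
#include <vector>

struct Card { int rank; int suit; }; // hypothetical layout, just for the sketch

// Move the top card from the deck into the dealer's hand.
void dealOne(std::vector<Card>& deck, std::vector<Card>& dealerCards)
{
    dealerCards.push_back(deck.back()); // appends at the next valid index
    deck.pop_back();                    // shrinks the deck; no new/delete needed
}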

How to resolve this C6385 code analysis warning: Reading invalid data

I am trying to address a code analysis warning that appears in the following method:
CStringArray* CCreateReportDlg::BuildCustomAssignArray(ROW_DATA_S &rsRowData)
{
    INT_PTR iAssign, iNumAssigns, iUsedAssign;
    CStringArray *pAryStrCustom = nullptr;
    CUSTOM_ASSIGN_S *psAssign = nullptr;

    if (rsRowData.uNumCustomToFill > 0)
    {
        pAryStrCustom = new CStringArray[rsRowData.uNumCustomToFill];
        iNumAssigns = m_aryPtrAssign.GetSize();
        for (iAssign = 0, iUsedAssign = 0; iAssign < iNumAssigns; iAssign++)
        {
            psAssign = (CUSTOM_ASSIGN_S*)m_aryPtrAssign.GetAt(iAssign);
            if (psAssign != nullptr)
            {
                if (!psAssign->bExcluded)
                {
                    pAryStrCustom[iUsedAssign].Copy(psAssign->aryStrBrothersAll);
                    iUsedAssign++;
                }
            }
        }
    }
    return pAryStrCustom;
}
The offending line of code is:
pAryStrCustom[iUsedAssign].Copy(psAssign->aryStrBrothersAll);
I compile this code for both 32-bit and 64-bit. The warning being raised is:
Warning (C6385) Reading invalid data from pAryStrCustom: the readable size is (size_t)*40+8 bytes, but 80 bytes may be read.
I don't know if it is relevant, but the CUSTOM_ASSIGN_S structure is defined as:
typedef struct tagCustomAssignment
{
    int iIndex;
    CString strDescription;
    CString strHeading;
    BOOL bExcluded;
    CStringArray aryStrBrothersAll;
    CStringArray aryStrBrothersWT;
    CStringArray aryStrBrothersSM;
    BOOL bIncludeWT;
    BOOL bIncludeTMS;
    BOOL bFixed;
    int iFixedType;
} CUSTOM_ASSIGN_S;
My code has been functional for years, but is there a coding improvement I can make to address this issue? I have read the linked article, and it is not clear to me. I have also seen this question (Reading Invalid Data c6385) along similar lines, but I can't see how it applies to my code.
Warning... the readable size is (size_t)*40+8 bytes, but 80 bytes may be read.
The wording of this warning is not accurate, because size_t is not a number; it's a data type. (size_t)*40+8 doesn't make sense. It's probably meant to be:
Warning... the readable size is '40+8 bytes', but '80 bytes' may be read.
This warning can be roughly reproduced with the following example:
//don't run this code, it's just for viewing the warning
size_t my_size = 1;
char* buf = new char[my_size];
buf[1];
//warning C6385: Reading invalid data from 'buf':
//the readable size is 'my_size*1' bytes, but '2' bytes may be read
The warning is correct and obvious: buf[1] is out of bounds. The compiler sees that the allocation size for buf is my_size*1 bytes, and index 1 accesses byte '2'. I think in other places the compiler prints the message incorrectly, but the actual warning is valid.
In any case, just make sure iUsedAssign stays within range:
if (!psAssign->bExcluded && iUsedAssign < rsRowData.uNumCustomToFill)
{
    ...
}

Why does this fix a heap corruption?

So I've got code:
float **array = new float*[width + 1]; // old line was '= new float*[width]'

// Create dynamic 2D array
for (int i = 0; i < width; ++i) {
    array[i] = new float[height + 1]; // old line was '= new float[height]'
}

// Hardcode 2D array for testing
for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
        array[i][j] = i + j;
    }
}

// deallocate heap memory
for (int i = 0; i < width; ++i) {
    delete [] array[i]; // Where corrupted memory error used to be
}
delete [] array;
(For the record, I know it would be more efficient to allocate a single block of memory, but I work closely with scientists who would never understand why/how to use it. Since it's run on servers, the bosses say this is preferred.)
My question is why does the height+1/width+1 fix the corrupted memory issue? I know the extra space is for the null terminator, but why is it necessary? And why did it work when height and width were the same, but break when they were different?
SOLN:
I had my height/width backwards while filling my array... -.-; Thank you to NPE.
The following comment is a red herring:
delete [] array[i]; //Where corrupted memory error used to be
This isn't where the memory error occurred. This is where it got detected (by the C++ runtime). Note that the runtime isn't obliged to detect this sort of error, so in a way it's doing you a favour. :-)
You have a buffer overrun (probably an off-by-one error in a loop) in the part of your code that you're not showing.
If you can't find it by examining the code, try Valgrind or -fsanitize=address in GCC.
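To make the detection gap concrete, here is a minimal hypothetical repro (not the asker's code). Built normally, it often only crashes at the delete[]; built with g++ -g -fsanitize=address, AddressSanitizer reports the bad write at the line where it actually happens:
int main() {
    float* a = new float[4];
    a[4] = 1.0f;  // off-by-one write past the end of the allocation
    delete[] a;   // without a sanitizer, the corruption may only surface here
    return 0;
}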
Edit: The issue with the code that you've added to the question:
// Hardcode 2D array for testing
for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
        array[i][j] = i + j;
    }
}
is that it has width and height (or, equivalently, i and j) the wrong way round. Unless width == height, your code has undefined behaviour.
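Since the outer array was allocated with width pointers and each row with height floats, the fill loop should run width on the outside; a minimal corrected sketch:
for (int i = 0; i < width; ++i) {      // first index spans the width rows that were allocated
    for (int j = 0; j < height; ++j) { // second index stays within each row's height floats
        array[i][j] = i + j;
    }
}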
Changing height and width to height+1 and width+1 is probably not going to be enough.
The code you posted was correct with height and width.
This means that something was likely writing just past the end of those arrays in some other part of the code; when you grew those arrays, the faulty code wrote at the very end of them instead of crashing. You didn't fix the issue, you just hid it.
The code actually crashed on the delete[] because of limitations in how the heap allocator detects corruption. Very often, off-by-one errors on the heap are detected by the next call to new/delete/malloc/free, not when they actually happen.
You can use tools like Valgrind if you want to know exactly when and where your program does illegal things with pointers.
You didn't fix the code. What you are doing is changing the executable with the new code, thus moving the corruption bug to another part of your program.
One thing you should not do is change your program to the version you say "works" with the +1 and then accept it. I know that may be tempting if the bug is hard to diagnose, but don't go this route.
What you must do is go back to the non-working version and really fix the issue. By "fix", I mean that you can explain what the fix does and why it fixes the problem.

Heap Corruption detected

This is the way that I have allocated the memory:
Expression = new char[MemBlock.length()];
VarArray = new char[Variables.length()];

for (unsigned int i = 0; i < MemBlock.length(); i++)
{
    Expression[i] = MemBlock.at(i);
}
Expression[MemBlock.length() + 1] = NULL;

for (unsigned int i = 0; i < Variables.length(); i++)
{
    VarArray[i] = Variables.at(i);
}
VarArray[Variables.length() + 1] = NULL;
When I try to delete it, I get the error:
Logic::~Logic() {
    delete[] VarArray; // -> happens on this line
    VarArray = NULL;
    delete[] Expression;
    Expression = NULL;
}
In the entire code I do not make any other changes to the new arrays, yet it tells me I have some corruption. I can't pinpoint the problem; any help would be great.
VarArray[Variables.length() + 1] = NULL;
accesses memory you do not own since this array is allocated thus:
VarArray = new char[Variables.length()];
The final element in this array has index Variables.length() - 1.
Running this in a debugger ought to be instructive. Some static analysis tools (e.g. lint) would highlight this misuse, I believe.
You could also consider using boost::scoped_array or similar to remove the need for manual deletion. A good lesson to learn early on in C++ is to adopt RAII instead of manual memory management wherever you can. For example:
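Here is a minimal sketch using std::unique_ptr, the standard-library counterpart of boost::scoped_array (the helper name is made up for illustration):
#include <cstring>
#include <memory>
#include <string>

std::unique_ptr<char[]> copyWithTerminator(const std::string& src)
{
    auto buf = std::make_unique<char[]>(src.length() + 1); // +1 leaves room for '\0'
    std::memcpy(buf.get(), src.data(), src.length());
    buf[src.length()] = '\0'; // the last valid index
    return buf;               // memory is released automatically, no delete[] needed
}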
VarArray = new char[Variables.length()];
VarArray[Variables.length() + 1] = NULL;
You can't do that; it's 2 elements too much.
Same for the other array.
Expression[MemBlock.length() + 1] = NULL;
is undefined behaviour, as is
VarArray[Variables.length() + 1] = NULL;
In the first case you can only index up to MemBlock.length() - 1, and in the second case up to Variables.length() - 1.
In both cases you are writing past the end of the allocated array and probably overwriting the control structures used by the runtime library to manage dynamically allocated memory.
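For reference, a minimal corrected sketch of the original allocation, reserving one extra byte and writing the terminator at the last valid index:
Expression = new char[MemBlock.length() + 1]; // room for the terminator
for (unsigned int i = 0; i < MemBlock.length(); i++)
{
    Expression[i] = MemBlock.at(i);
}
Expression[MemBlock.length()] = '\0'; // last valid index

VarArray = new char[Variables.length() + 1];
for (unsigned int i = 0; i < Variables.length(); i++)
{
    VarArray[i] = Variables.at(i);
}
VarArray[Variables.length()] = '\0';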

Strange segfault with tinyxml2

I've a segfault that I don't understand.
It always occurs at i = 0 and j between 1000 and 1100.
Here is the backtrace and all the sources required to see the problem: https://gist.github.com/Quent42340/7592902
Please help me.
EDIT: Oh, I forgot: in my gist, map.cpp:72 is commented out. It's commented out in my source code too. I did that to see where the problem came from, but even without that line, the problem is still there.
I see you allocate an array of pointers here:
m_data = new u16*[m_layers];
But I never see you allocate the second dimension of this array. You ought to allocate the rows of your map, either as one large chunk of memory that you split into rows yourself, or by new-ing each row.
For example, if you add one line to your for (i ...) loop:
for (u8 i = 0; i < m_layers; i++) {
    m_data[i] = new u16[m_width * m_height];
    // ... rest of the existing per-layer loop body ...
}
If you go that route, you'll also need to upgrade your destructor:
Map::~Map() {
    // WARNING: This doesn't handle the case where the map failed to load...
    // Exercise for the reader.
    for (u8 i = 0; i < m_layers; i++) {
        delete[] m_data[i];
    }
    delete[] m_data;
}
An alternative approach would be to use std::vector (a fixed-size std::array won't work here, since the dimensions are only known at run time) and let the C++ standard library manage the memory for you.
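For instance, a rough sketch (allocateLayers is a hypothetical helper; u16 and the dimensions are assumed to match the gist):
#include <cstdint>
#include <vector>

using u16 = std::uint16_t;

// One flat row-major buffer per layer; no manual new/delete,
// and the destructor comes for free.
std::vector<std::vector<u16>> m_data;

void allocateLayers(std::size_t layers, std::size_t width, std::size_t height)
{
    m_data.assign(layers, std::vector<u16>(width * height, 0));
}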