GCC version: gcc 4.8.5
Compiler options: -std=c++11 -O3
SIZE = 50 * 1024 * 1024
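The snippets below omit their boilerplate. A minimal sketch of what they assume (the includes plus a now() helper returning milliseconds) could look like the following; the exact definition of now() is not given in the question, so this is just one plausible version based on std::chrono:

#include <chrono>
#include <cstring>
#include <iostream>
using namespace std;

const size_t SIZE = 50 * 1024 * 1024;

// one plausible definition of now(): milliseconds from a monotonic clock
static size_t now() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}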
The first piece of code:
int main() {
    char* src = new char[SIZE];
    char* dst = new char[SIZE];
    memset(dst, 'a', SIZE);
    for (size_t i = 0; i < 5; ++i) {
        size_t start = now();
        memcpy(dst, src, SIZE);
        cout << "timer:" << now() - start << "ms" << endl;
    }
    return 0;
}
Output:
timer:5ms
timer:4ms
timer:5ms
timer:5ms
timer:4ms
The second piece of code:
int main() {
    char* src = new char[SIZE];
    char* dst = new char[SIZE];
    memset(src, 'a', SIZE);
    memset(dst, 'a', SIZE);
    for (size_t i = 0; i < 5; ++i) {
        size_t start = now();
        memcpy(dst, src, SIZE);
        cout << "timer:" << now() - start << "ms" << endl;
    }
    return 0;
}
Output:
timer:9ms
timer:8ms
timer:8ms
timer:8ms
timer:8ms
The third piece of code:
int main() {
    char* src = new char[SIZE];
    char* dst = new char[SIZE];
    for (size_t i = 0; i < 5; ++i) {
        size_t start = now();
        memcpy(dst, src, SIZE);
        cout << "timer:" << now() - start << "ms" << endl;
    }
    return 0;
}
Output:
timer:22ms
timer:4ms
timer:5ms
timer:5ms
timer:5ms
Summary:
Comparing the first and third cases: the slow first round of the third case is caused by minor page faults (dst is written for the first time inside the timed loop).
Questions:
Why, in the 1st case, doesn't reading src in memcpy trigger any minor page faults?
Why is the 2nd case roughly twice as slow as the 1st? Is there some OS optimization at work?
memcpy is bounded by external memory throughput; it looks like the OS allocates the memory only virtually in the page tables and performs copy-on-write. This would explain both phenomena: the unmodified src is backed by a single shared zero-filled physical page, which sits in the fastest cache level in cases 1 and 3, whereas in case 2 every read of src has to go out to external memory. The roughly 5x penalty in run 1 of case 3 is due to the virtually allocated dst being faulted in, copy-on-write style, to unique physical pages as memcpy writes to it.
Timing the initial memsets N times in a row should confirm the hypothesis.
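A sketch of that check, reusing SIZE and now() from above: the first memset of a freshly allocated buffer should be noticeably slower than the later ones, because it is the pass that triggers the minor page faults.

int main() {
    char* buf = new char[SIZE];
    for (size_t i = 0; i < 5; ++i) {
        size_t start = now();
        memset(buf, 'a', SIZE);   // first pass faults in real pages; later passes reuse them
        cout << "memset timer:" << now() - start << "ms" << endl;
    }
    delete[] buf;
    return 0;
}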
The copy-on-write technique can be extended to support efficient memory allocation by having a page of physical memory filled with zeros. When the memory is allocated, all the pages returned refer to the page of zeros and are all marked copy-on-write. This way, physical memory is not allocated for the process until data is written, allowing processes to reserve more virtual memory than physical memory and use memory sparsely, at the risk of running out of virtual address space.
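A Linux-specific sketch of that behaviour (it assumes a Linux /proc filesystem, which matches the gcc setup in the question): the resident set barely grows when new[] returns, and only grows once the pages are actually written.

#include <cstdio>
#include <cstring>
#include <iostream>

// resident set size in pages, read from /proc/self/statm (Linux-specific)
static long residentPages() {
    long size = 0, resident = 0;
    FILE* f = std::fopen("/proc/self/statm", "r");
    if (f) {
        std::fscanf(f, "%ld %ld", &size, &resident);
        std::fclose(f);
    }
    return resident;
}

int main() {
    const size_t N = 50 * 1024 * 1024;
    std::cout << "resident pages before new[]: " << residentPages() << "\n";
    char* p = new char[N];                 // only a virtual reservation at this point
    std::cout << "resident pages after new[]:  " << residentPages() << "\n";
    std::memset(p, 'a', N);                // writing faults in real physical pages
    std::cout << "resident pages after write:  " << residentPages() << "\n";
    delete[] p;
    return 0;
}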
Related
I built a simple CUDA kernel that performs a sum over elements: each thread adds one input value to the output buffer, so each thread computes one value. 2432 threads are used (19 blocks * 128 threads).
The output buffer stays the same, while the input buffer pointer is shifted by the thread count after each kernel execution. So in total, a loop invokes the add kernel until all input data has been processed.
Example:
All my input values are set to 1. The output buffer size is 2432. The input buffer size is 2432 * 2000.
The add kernel is called 2000 times to add 1 to each field of output, so the end result in output is 2000 in every field. I call the function aggregate, which contains a for loop calling the kernel as often as needed to pass over the complete input data.
This works so far, unless I call the kernel too often.
However, if I call the kernel 2500 times, I get an illegal memory access CUDA error.
As you can see, the runtime of the last successful kernel increases by three orders of magnitude. Afterwards my pointers are invalidated and the following invocations result in cudaErrorIllegalAddress.
I cleaned up the code to get a minimal working example:
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <vector>
#include <stdio.h>
#include <iostream>
using namespace std;
template <class T> __global__ void addKernel_2432(int *in, int *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = out[i] + in[i];
}
static int aggregate(int* array, size_t size, int* out) {
    size_t const vectorCount = size / 2432;
    cout << "ITERATIONS: " << vectorCount << endl;
    for (size_t i = 0; i < vectorCount-1; i++)
    {
        addKernel_2432<int><<<19,128>>>(array, out);
        array += vectorCount;
    }
    addKernel_2432<int><<<19, 128>>>(array, out);
    return 1;
}
int main()
{
    int* dev_in1 = 0;
    size_t vectorCount = 2432;
    int* dev_out = 0;
    size_t datacount = 2432 * 2500;
    std::vector<int> hostvec(datacount);
    // create input buffer, filled with 1
    std::fill(hostvec.begin(), hostvec.end(), 1);
    // allocate input buffer and output buffer
    cudaMalloc(&dev_in1, datacount * sizeof(int));
    cudaMalloc(&dev_out, vectorCount * sizeof(int));
    // set output buffer to 0
    cudaMemset(dev_out, 0, vectorCount * sizeof(int));
    // copy input buffer to GPU
    cudaMemcpy(dev_in1, hostvec.data(), datacount * sizeof(int), cudaMemcpyHostToDevice);
    // call kernel datacount / vectorcount times
    aggregate(dev_in1, datacount, dev_out);
    // return data to check for correctness
    cudaMemcpy(hostvec.data(), dev_out, vectorCount * sizeof(int), cudaMemcpyDeviceToHost);
    if (cudaSuccess != cudaMemcpy(hostvec.data(), dev_out, vectorCount * sizeof(int), cudaMemcpyDeviceToHost))
    {
        cudaError err = cudaGetLastError();
        cout << " CUDA ERROR: " << cudaGetErrorString(err) << endl;
    }
    else
    {
        cout << "NO CUDA ERROR" << endl;
        cout << "RETURNED SUM DATA" << endl;
        for (int i = 0; i < 2432; i++)
        {
            cout << hostvec[i] << " ";
        }
    }
    cudaDeviceReset();
    return 0;
}
If you compile and run it, you get an error.
Change:
size_t datacount = 2432 * 2500;
to
size_t datacount = 2432 * 2400;
and it gives the correct results.
I am looking for any ideas why it breaks after 2432 kernel invocations.
What I have found so far by googling:
A wrong target architecture. I use a 1070 Ti and my target is set to compute_61,sm_61 in the Visual Studio project properties. Changing that does not make a difference.
Did I miss something? Is there a limit on how many times a kernel can be called before CUDA invalidates the pointers? Thank you for your help. I am using Windows, Visual Studio 2019 and CUDA runtime 11.
This is the output in both cases, success and failure:
[screenshot: output of the successful run]
Error:
[screenshot: output of the failing run]
static int aggregate(int* array, size_t size, int* out) {
    size_t const vectorCount = size / 2432;
    for (size_t i = 0; i < vectorCount-1; i++)
    {
        array += vectorCount;
    }
}
That value is not really a vector count but the number of iterations, and it is what you have accidentally been incrementing the pointer by. It works fine while vectorCount <= 2432 (but yields wrong results), and results in a buffer overflow above that.
array += 2432 is what you intended to write.
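A sketch of the corrected loop (same kernel and launch configuration as in the question; only the pointer stride changes):

static int aggregate(int* array, size_t size, int* out) {
    size_t const iterations = size / 2432;      // how many times the kernel is launched
    for (size_t i = 0; i < iterations; i++)
    {
        addKernel_2432<int><<<19, 128>>>(array, out);
        array += 2432;                          // advance by one input vector, not by the iteration count
    }
    return 1;
}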
Problem and Code
I am working with code to take a screenshot on a Raspberry Pi. Using some magic from the VC handler, I can take a screenshot and store it in memory with calloc. I can use this to store the data in a file as a ppm image with the requisite header using:
void * image;
image = calloc(1, width * 3 * height);
// code to store data into *image
FILE *fp = fopen("myfile.ppm", "wb");
fprintf(fp, "P6\n%d %d\n255\n", width, height);
fwrite(image, width*3*height, 1, fp);
fclose(fp);
This successfully stores the data. I can access it and view it normally.
However, if I instead try to inspect the data which are being put into the file for debugging purposes by printing:
int cnt = 0;
std::string imstr = (char *)image;
for (int i = 0; i < (width*3*height); i++) {
    std::cout << (int)imstr[i] << " " << cnt << std::endl;
    cnt += 1;
}
I segfault early. The numbers which are returned in the print make sense for the context (e.g. color values <255)
Example Numbers
In the case of a 1280 x 768 x 3 image, my cnt stops at 64231. The value it stops at doesn't seem to have any relation to the sizeof char or int.
I think I'm missing something obvious here, but I can't see it. Any suggestions?
Very probably you have at least one null character in (char *)image, so the std::string is shorter than width*3*height: with that initialization, only the characters up to the first null character are used.
Use something like a std::vector<char> (or iterate the raw buffer) rather than a std::string initialized like that.
The way you are converting the image data to a std::string is wrong. If the image's raw data contains any 0x00 bytes then the std::string will be truncated, causing your loop to access out of bounds of the std::string. And if the image's raw data does not contain any 0x00 bytes then the std::string constructor will try to read past the bounds of the image's allocated memory.
You need to take the image's size into account when constructing the std::string, eg:
size_t cnt = 0;
std::string imstr(static_cast<char*>(image), width*3*height);
for (size_t i = 0; i < imstr.size(); ++i) {
    std::cout << static_cast<int>(imstr[i]) << " " << cnt << std::endl;
    ++cnt;
}
Otherwise, simply don't convert the image to std::string at all. You can iterate the image's raw data directly instead, eg:
size_t cnt = 0, imsize = width*3*height;
char *imdata = static_cast<char*>(image);
for (size_t i = 0; i < imsize; ++i) {
    std::cout << static_cast<int>(imdata[i]) << " " << cnt << std::endl;
    ++cnt;
}
I am trying to learn both details on memory usage works, as well as how to measure it using C++. I know that under Windows, a quick way to retrieve the amount of RAM being used by the current application process, when including <Windows.h>, is:
PROCESS_MEMORY_COUNTERS info;
GetProcessMemoryInfo( GetCurrentProcess( ), &info, sizeof(info) );
(uint64_t)info.WorkingSetSize;
Then, I used that to run a very simple test:
#include <iostream>
#include <Windows.h>
int main(void)
{
    uint64_t currentUsedRAM(0);
    PROCESS_MEMORY_COUNTERS info;
    GetProcessMemoryInfo(GetCurrentProcess(), &info, sizeof(info));
    currentUsedRAM = info.WorkingSetSize;
    const int N(1000000);
    int x[N]; // in the second run, comment this line out
    int y[N]; // in the second run, comment this line out
    //int *x = new int[N]; // in the second run, uncomment this line
    //int *y = new int[N]; // in the second run, uncomment this line
    for (int i = 0; i < N; i++)
    {
        x[i] = 1;
        y[i] = 2;
    }
    GetProcessMemoryInfo(GetCurrentProcess(), &info, sizeof(info));
    currentUsedRAM = info.WorkingSetSize - currentUsedRAM;
    std::cout << "Current RAM used: " << currentUsedRAM << "\n";
    return 0;
}
What I don't understand is that when I run the code above, the output is: Current RAM used: 0, while I was expecting something around 8 MB, since I filled two 1D int arrays of 1 million entries each. Now, if I re-run the code but make x and y dynamically allocated arrays, the output is, as expected: Current RAM used: 8007680.
Why is that? How can I make it detect the memory usage in both cases?
The compiler has optimised your code: in fact, in your first run neither x nor y is ever allocated. Given that there is a visible side effect (the value read back through GetProcessMemoryInfo), this optimisation may seem odd.
Anyway, you can prevent it by adding another side effect, such as outputting the sum of the elements of the two arrays. Note that with these sizes this will guarantee a crash: two 4 MB automatic arrays overflow the default 1 MB stack of an MSVC build.
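For illustration, such a side effect could be added just before the second GetProcessMemoryInfo call in the question's main (the variable names are the question's; the stack-overflow caveat above applies to the stack-array variant, while the new[] variant runs fine):

long long sum = 0;
for (int i = 0; i < N; i++)
    sum += x[i] + y[i];
std::cout << "sum: " << sum << "\n";   // a visible side effect that depends on x and y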
Memory for local objects with automatic storage duration is allocated at the start of the enclosing block and released at its end. So your code can't measure the memory usage of any automatic storage duration variable in main (nor of my deleted code snippet, which I hadn't been aware of). Things are different for objects with dynamic storage duration: they are allocated per request.
For the discussion in the comments I designed a test that uses recursion. You can see that the memory usage increases as the program recurses deeper, which shows that the counter does include stack usage. Note that it isn't counting how much memory your objects need, but how much your program needs.
void foo(int depth, int *a, int *b, uint64_t usage) {
    if (depth >= 100)
        return;
    int x[100], y[100];
    for (int i = 0; i < 100; i++)
    {
        x[i] = 1 + (a == nullptr ? 0 : a[i]);
        y[i] = 2 + (b == nullptr ? 0 : b[i]);
    }
    PROCESS_MEMORY_COUNTERS info;
    GetProcessMemoryInfo(GetCurrentProcess(), &info, sizeof(info));
    std::cout << "Current RAM used: " << info.WorkingSetSize - usage << "\n";
    foo(depth + 1, x, y, usage);
    int sum = 0;
    for (int i = 0; i < 100; i++)
        sum += x[i] + y[i];
    std::cout << sum << std::endl;
}
int main(void)
{
    uint64_t currentUsedRAM(0);
    PROCESS_MEMORY_COUNTERS info;
    GetProcessMemoryInfo(GetCurrentProcess(), &info, sizeof(info));
    currentUsedRAM = info.WorkingSetSize;
    foo(0, nullptr, nullptr, currentUsedRAM);
    return 0;
}
/*
Current RAM used: 0
Current RAM used: 61440
Current RAM used: 65536
Current RAM used: 65536
Current RAM used: 65536
Current RAM used: 65536
Current RAM used: 69632
Current RAM used: 69632
Current RAM used: 69632
Current RAM used: 69632
Current RAM used: 69632
Current RAM used: 73728
*/
The system allocates 4 KB at a time, which is the size of a page. I don't know why it reports 0 at first and then suddenly 61440. Explaining how Windows manages memory is very hard and far beyond my ability, but I am confident about the 4 KB granularity, and that it does count the memory used by variables with automatic storage duration.
This is an empirical assumption (that allocating is faster than deallocating).
This is also one of the reasons, I guess, why heap-based storage (such as the STL containers) chooses not to return currently unused memory to the system (which is why the shrink-to-fit idiom was born).
And we shouldn't confuse, of course, 'heap' memory with 'heap'-like data structures.
So why is deallocation slower?
Is it Windows-specific (I see it on Windows 8.1) or OS-independent?
Is there some C++-specific memory manager automatically involved in using new/delete, or does memory management rely entirely on the OS? (I know C++11 introduced some garbage-collection support, which I have never really used, preferring the good old stack, static duration, self-managed containers and RAII.)
Also, in the code of the Folly string I saw plain C heap allocation/deallocation being used; is it faster than C++ new/delete?
P.S. Please note that the question is not about virtual memory mechanics; I understand that user-space programs don't work with physical memory addresses directly.
The assertion that allocating memory is faster than deallocating it seemed a bit odd to me, so I tested it. I ran a test where I allocated 64MB of memory in 32-byte chunks (so 2M calls to new), and I tried deleting that memory in the same order it was allocated, and in a random order. I found that linear-order deallocation was about 3% faster than allocation, and that random deallocation was about 10% slower than linear allocation.
I then ran a test where I started with 64MB of allocated memory, and then 2M times either allocated new memory or deleted existing memory (at random). Here, I found that deallocation was about 4.3% slower than allocation.
So, it turns out you were correct - deallocation is slower than allocation (though I wouldn't call it "much" slower). I suspect this has simply to do with more random accesses, but I have no evidence for this other than that the linear deallocation was faster.
To answer some of your questions:
Is there some C++ specific memory manager automatically involved on using 'new' / 'delete'?
Yes. The OS has system calls which allocate pages of memory (typically 4KB chunks) to processes. It's the process' job to divide up those pages into objects. Try looking up the "GNU Memory Allocator."
I saw using old C heap allocation / deallocation, is it faster then C++ 'new' / 'delete'?
Most C++ new/delete implementations just call malloc and free under the hood. This is not required by the standard, however, so it's a good idea to always use the same allocation and deallocation function on any particular object.
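As a small illustration of what pairing the allocation and deallocation functions means in practice (mixing them is undefined behaviour, which is why it matters even on implementations where new forwards to malloc):

#include <cstdlib>

int main() {
    int* a = new int[16];
    delete[] a;                                        // new[] pairs with delete[]

    int* b = static_cast<int*>(std::malloc(16 * sizeof(int)));
    std::free(b);                                      // malloc pairs with free

    // Mixing them (free(a), delete b, delete[] b, ...) is undefined behaviour,
    // even where operator new happens to call malloc under the hood.
    return 0;
}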
I ran my tests with the native testing framework provided in Visual Studio 2015, on a Windows 10 64-bit machine (The tests were also 64-bit). Here's the code:
#include "stdafx.h"
#include "CppUnitTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;
namespace AllocationSpeedTest
{
class Obj32 {
uint64_t a;
uint64_t b;
uint64_t c;
uint64_t d;
};
constexpr int len = 1024 * 1024 * 2;
Obj32* ptrs[len];
TEST_CLASS(UnitTest1)
{
public:
TEST_METHOD(Linear32Alloc)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
}
TEST_METHOD(Linear32AllocDealloc)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
for (int i = 0; i < len; ++i) {
delete ptrs[i];
}
}
TEST_METHOD(Random32AllocShuffle)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
srand(0);
for (int i = 0; i < len; ++i) {
int pos = (rand() % (len - i)) + i;
Obj32* temp = ptrs[i];
ptrs[i] = ptrs[pos];
ptrs[pos] = temp;
}
}
TEST_METHOD(Random32AllocShuffleDealloc)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
srand(0);
for (int i = 0; i < len; ++i) {
int pos = (rand() % (len - i)) + i;
Obj32* temp = ptrs[i];
ptrs[i] = ptrs[pos];
ptrs[pos] = temp;
}
for (int i = 0; i < len; ++i) {
delete ptrs[i];
}
}
TEST_METHOD(Mixed32Both)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
srand(0);
for (int i = 0; i < len; ++i) {
if (rand() % 2) {
ptrs[i] = new Obj32();
}
else {
delete ptrs[i];
}
}
}
TEST_METHOD(Mixed32Alloc)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
srand(0);
for (int i = 0; i < len; ++i) {
if (rand() % 2) {
ptrs[i] = new Obj32();
}
else {
//delete ptrs[i];
}
}
}
TEST_METHOD(Mixed32Dealloc)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
srand(0);
for (int i = 0; i < len; ++i) {
if (rand() % 2) {
//ptrs[i] = new Obj32();
}
else {
delete ptrs[i];
}
}
}
TEST_METHOD(Mixed32Neither)
{
for (int i = 0; i < len; ++i) {
ptrs[i] = new Obj32();
}
srand(0);
for (int i = 0; i < len; ++i) {
if (rand() % 2) {
//ptrs[i] = new Obj32();
}
else {
//delete ptrs[i];
}
}
}
};
}
And here are the raw results over several runs; all numbers are in milliseconds:
[results table attached as an image]
I had much the same idea as @Basile: I wondered whether your base assumption was actually (even close to) correct. Since you tagged the question C++, I wrote a quick benchmark in C++ instead.
#include <vector>
#include <iostream>
#include <numeric>
#include <chrono>
#include <iomanip>
#include <locale>
int main() {
    std::cout.imbue(std::locale(""));
    using namespace std::chrono;
    using factor = microseconds;
    auto const size = 2000;
    std::vector<int *> allocs(size);
    auto start = high_resolution_clock::now();
    for (int i = 0; i < size; i++)
        allocs[i] = new int[size];
    auto stop = high_resolution_clock::now();
    auto alloc_time = duration_cast<factor>(stop - start).count();
    start = high_resolution_clock::now();
    for (int i = 0; i < size; i++)
        delete[] allocs[i];
    stop = high_resolution_clock::now();
    auto del_time = duration_cast<factor>(stop - start).count();
    std::cout << std::left << std::setw(20) << "alloc time: " << alloc_time << " uS\n";
    std::cout << std::left << std::setw(20) << "del time: " << del_time << " uS\n";
}
I also used VC++ on Windows instead of gcc on Linux. The result wasn't much different though: freeing the memory took substantially less time than allocating it did. Here are the results from three successive runs.
alloc time: 2,381 uS
del time: 1,429 uS
alloc time: 2,764 uS
del time: 1,592 uS
alloc time: 2,492 uS
del time: 1,442 uS
I'd warn, however, that allocation and freeing are handled (primarily) by the standard library, so this could differ between one standard library and another (even when using the same compiler). I'd also note that it wouldn't surprise me if the picture changed somewhat in multi-threaded code. Although it is not actually correct, a few authors appear to be under the misapprehension that freeing in a multithreaded environment requires locking the heap for exclusive access. This can be avoided, but the means of doing so isn't necessarily immediately obvious.
I am not sure of your observation. I wrote the following program (on Linux, hopefully you could port it to your system).
// public domain code
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <string.h>
#include <assert.h>
const unsigned possible_word_sizes[] = {
1, 2, 3, 4, 5,
8, 12, 16, 24,
32, 48, 64, 128,
256, 384, 2048
};
long long totalsize;
// return a calloc-ed array of nbchunks malloced zones of
// somehow random size
void **
malloc_chunks (int nbchunks)
{
const int nbsizes =
(int) (sizeof (possible_word_sizes)
/ sizeof (possible_word_sizes[0]));
void **ad = calloc (nbchunks, sizeof (void *));
if (!ad)
{
perror ("calloc chunks");
exit (EXIT_FAILURE);
};
for (int ix = 0; ix < nbchunks; ix++)
{
unsigned sizindex = random () % nbsizes;
unsigned size = possible_word_sizes[sizindex];
void *zon = malloc (size * sizeof (void *));
if (!zon)
{
fprintf (stderr,
"malloc#%d (%d words) failed (total %lld) %s\n",
ix, size, totalsize, strerror (errno));
exit (EXIT_FAILURE);
}
((int *) zon)[0] = ix;
totalsize += size;
ad[ix] = zon;
}
return ad;
}
void
free_chunks (void **chks, int nbchunks)
{
// first, free the two thirds of chunks in random order
for (int i = 0; 3 * i < 2 * nbchunks; i++)
{
int pix = random () % nbchunks;
if (chks[pix])
{
free (chks[pix]);
chks[pix] = NULL;
}
}
// then, free the rest in reverse order
for (int i = nbchunks - 1; i >= 0; i--)
if (chks[i])
{
free (chks[i]);
chks[i] = NULL;
}
}
int
main (int argc, char **argv)
{
assert (sizeof (int) <= sizeof (void *));
int nbchunks = (argc > 1) ? atoi (argv[1]) : 32768;
if (nbchunks < 128)
nbchunks = 128;
srandom (time (NULL));
printf ("nbchunks=%d\n", nbchunks);
void **chks = malloc_chunks (nbchunks);
clock_t clomall = clock ();
printf ("clomall=%ld totalsize=%lld words\n",
(long) clomall, totalsize);
free_chunks (chks, nbchunks);
clock_t clofree = clock ();
printf ("clofree=%ld\n", (long) clofree);
return 0;
}
I compiled it with gcc -O2 -Wall mf.c -o mf on my Debian/Sid/x86-64 box (i7-3770K, 16 GB). I ran time ./mf 100000 and got:
nbchunks=100000
clomall=54162 totalsize=19115681 words
clofree=83895
./mf 100000 0.02s user 0.06s system 95% cpu 0.089 total
On my system, clock gives CPU microseconds. If the cost of the calls to random is negligible (and I don't know whether it is) w.r.t. the malloc and free time, I tend to disagree with your observations: free seems to be about twice as fast as malloc. My gcc is 6.1, my libc is glibc 2.22.
Please take time to compile the above benchmark on your system and report the timings.
FWIW, I took Jerry's code and
g++ -O3 -march=native jerry.cc -o jerry
time ./jerry; time ./jerry; time ./jerry
gives
alloc time: 1940516
del time: 602203
./jerry 0.00s user 0.01s system 68% cpu 0.016 total
alloc time: 1893057
del time: 558399
./jerry 0.00s user 0.01s system 68% cpu 0.014 total
alloc time: 1818884
del time: 527618
./jerry 0.00s user 0.01s system 70% cpu 0.014 total
When you allocate small memory blocks, the block size you specify maps directly to a suballocator for that size, which is commonly represented as a "slab" of memory containing same size records, to avoid memory fragmentation. This can be very fast, similar to an array access. But freeing such blocks is not so straight forward, because you are passing a pointer to memory of unknown size, requiring additional work to determine what slab it belongs to, before the block can be returned to its proper place.
When you allocate large blocks of virtual memory, a memory page range is set up in your process space without actually mapping any physical memory to it, and that requires very little work to accomplish. But freeing such large blocks can require much more work, because the pointer freed must first be matched to the page tables for that range, followed by walking through all of the page entries for the memory range that it spans, and releasing all of the physical memory pages assigned to that range by the intervening page faults.
Of course, the details of this will vary depending on the implementation being used, but the principles remain much the same: memory allocation of a known block size requires less effort than releasing a pointer to a memory block of unknown size. My knowledge of this comes directly from my experience developing high-performance commercial grade RAII memory allocators.
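To make the asymmetry concrete, here is a toy sketch (not any particular allocator, just an illustration of the idea above): handing out a block of a known size class is a single free-list pop, while a general free(void*) must first discover which slab the pointer belongs to.

#include <cstddef>

// Toy fixed-size slab: allocating a known size class is a free-list pop,
// but releasing an arbitrary pointer first needs an owner lookup.
struct Slab {
    static const size_t kBlock = 32, kCount = 1024;
    char storage[kBlock * kCount];
    void* freeList;
    Slab() : freeList(nullptr) {
        for (size_t i = 0; i < kCount; ++i) {   // thread every block onto the free list
            void* p = storage + i * kBlock;
            *static_cast<void**>(p) = freeList;
            freeList = p;
        }
    }
    bool owns(const void* p) const {
        return p >= storage && p < storage + sizeof(storage);
    }
    void* alloc() {                             // known size: just pop the free-list head
        void* p = freeList;
        if (p) freeList = *static_cast<void**>(p);
        return p;
    }
    void release(void* p) {                     // cheap once the owning slab is known
        *static_cast<void**>(p) = freeList;
        freeList = p;
    }
};

Slab slabs[4];                                  // say, one slab per small size class

void slabFree(void* p) {
    for (Slab& s : slabs)                       // the extra work on free: find the owner of p
        if (s.owns(p)) { s.release(p); return; }
    // a real allocator would fall through to the general heap here
}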
I should also point out that since every heap allocation has a matching and corresponding release, this pair of operations represents a single allocation cycle, i.e. as the two sides of one coin. Together, their execution time can be accurately measured, but separately such measurement is difficult to pin down, as it varies widely depending on block size, previous activity across similar sizes, caching and other operational considerations. But in the end, allocate/free differences may not much matter, since you don't do one without the other.
The problem here is heap fragmentation. Programs written in languages with explicit pointer arithmetic have no realistic way of defragmenting the heap.
If your heap is fragmented, you can't return memory to the OS. Leaving virtual memory tricks aside, the OS depends on a brk(2)-like mechanism: you set an upper bound on all the memory addresses you will refer to. If even one buffer that is still in use sits near that boundary, you can't explicitly return memory to the OS, no matter whether 99% of all the memory in your program has been freed.
Deallocation doesn't have to be slower than allocation. But the combination of manual deallocation and heap fragmentation makes allocation slower and more complex.
GCs fight this by compacting the heap. That way, allocation is just a pointer increment for them, and deallocation isn't needed for the bulk of objects.
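A Linux/glibc-specific sketch of that pinning effect (it assumes the small allocations are served from the program break and that the last one ends up near the top of it, which is typical for glibc's main arena but not guaranteed): freeing almost everything still leaves the break where it was, because one live block pins it.

#include <unistd.h>
#include <cstdlib>
#include <iostream>

int main() {
    const int kBlocks = 100000;
    void* before = sbrk(0);                    // current program break

    static void* blocks[kBlocks];
    for (int i = 0; i < kBlocks; ++i)
        blocks[i] = std::malloc(32);           // small allocations, served from the break

    for (int i = 0; i < kBlocks - 1; ++i)      // free everything except the last block,
        std::free(blocks[i]);                  // which (typically) sits near the top

    void* after = sbrk(0);
    std::cout << "break grew by "
              << static_cast<char*>(after) - static_cast<char*>(before)
              << " bytes despite freeing almost everything\n";

    std::free(blocks[kBlocks - 1]);
    return 0;
}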
I have a function that allocates a buffer for the size of a file with
char *buffer = new char[size_of_file];
Then I loop over the buffer and copy part of its contents into a subbuffer, to work with smaller units of it.
char *subbuffer = new char[size+1];
for (int i = 0; i < size; i++) {
    subbuffer[i] = (buffer + cursor)[i];
}
Next I call a function and pass it this subbuffer, an arbitrary cursor for a location in the subbuffer, and the size of the text to be extracted.
wchar_t* FileReader::getStringForSizeAndCursor(int32_t size, int cursor, char *buffer) {
    int wlen = size/2;
#if MARKUP_SIZEOFWCHAR == 4 // sizeof(wchar_t) == 4
    uint32_t *dest = new uint32_t[wlen+1];
#else
    uint16_t *dest = new uint16_t[wlen+1];
#endif
    char *bcpy = new char[size];
    memcpy(bcpy, (buffer + cursor), size+2);
    unsigned char *ptr = (unsigned char *)bcpy; // need to be careful not to read outside the buffer
    for (int i = 0; i < wlen; i++) {
        dest[i] = (ptr[0] << 8) + ptr[1];
        ptr += 2;
    }
    //cout << "size:: " << size << " wlen:: " << wlen << " c:: " << c << "\n";
    dest[wlen] = ('\0' << 8) + '\0';
    return (wchar_t *)dest;
}
I store this value as a property of a struct whilst looping through the file.
My issue is that when I free subbuffer and then read the title properties of my structs, by looping over an array of struct pointers, my app segfaults. GDB tells me it finished normally, though, but a bunch of the records that I cout are missing.
I suspect this has to do with scope somewhere. I thought the memcpy in getStringForSizeAndCursor would fix the segfault, since it copies the bytes out of subbuffer before I free it. I would expect those copies to be cleaned up later by my struct destructor, but either things are destroyed earlier than I expect or some memory still points into the original subbuffer. If I let subbuffer leak, I get back the data I expected, but that is not a solution.
The only definite error I can see in your question's code is the too small allocation of bcpy, where you allocate a buffer of size size and promptly copy size+2 bytes to the buffer. Since you're not using the extra 2 bytes in the code, just drop the +2 in the copy.
Besides that, I can only see one suspicious thing: you're doing
char *subbuffer = new char[size+1];
and copying size bytes to the buffer. The allocation hints that you're reserving extra memory for a zero terminator, but either it shouldn't be there at all (no +1) or you should allocate 2 extra bytes, since your function hints at a double-byte character set. Either way, I can't see you zero-terminating it, so using it as a zero-terminated string will probably break.
@Grizzly in the comments has a point too: allocating and handling memory for strings and wstrings is probably something you could offload to the STL with good results.
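Along those lines, a hedged sketch of what the conversion could look like with the standard library doing the memory management (it assumes, as the question's code does, big-endian two-byte characters in the file; the function is a free-standing stand-in, not the original member function):

#include <cstdint>
#include <string>

std::wstring stringForSizeAndCursor(int32_t size, int cursor, const char* buffer) {
    const unsigned char* ptr =
        reinterpret_cast<const unsigned char*>(buffer + cursor);
    std::wstring result;
    result.reserve(size / 2);
    for (int32_t i = 0; i + 1 < size; i += 2) {
        // combine two big-endian bytes into one wide character
        result.push_back(static_cast<wchar_t>((ptr[i] << 8) | ptr[i + 1]));
    }
    return result;   // termination, sizing and cleanup are handled by std::wstring
}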