Issue
I wrote a C++ program that runs on an Arduino UNO.
I'm using ArduinoSTL only for its vector functionality, but the library uses 459 bytes of the Arduino's RAM (23% of RAM wasted is way too much for what I do), and it adds about 5 seconds to every compile with this library (which is annoying).
What I want to do
I want to be able to change the size of arrays, maybe with vectors.
I don't want to use libraries that waste RAM for nothing.
What I've thought of doing
I thought that maybe I could remove everything but the vector functionality of ArduinoSTL, so it would compile faster and take less RAM.
I attempted to do it.
In ArduinoSTL's directory, there are a lot of .cpp files, one of them named Vector.cpp.
My solution was to erase everything except Vector.cpp.
It didn't work.
Solution 1
Use regular arrays!
Using a big regular array can sometimes be the better choice: ArduinoSTL takes a lot of memory, and removing it frees enough space for a generously sized plain array, and compiling is much faster.
For what I was doing, I didn't really need a vector. A big array was enough.
Solution 2
Use stdlib.h!
It provides some functions such as malloc(), calloc(), realloc() and free().
The main advantage is that it is standard and is not only for Arduino.
Here is a piece of regular C++ code showing how to use it.
#include <stdlib.h> // Mandatory for memory manipulation
#include <stdint.h> // For uint8_t
#include <iostream> // Used for console output (std::cout)

int main(){
    uint8_t* MemBlock=nullptr; // Just a pointer pointing to nothing
    int BlockSize=0;           // The size of the block in bytes
    bool Condition=true;       // The condition for the if() statement
    if(Condition){
        BlockSize=64;
        MemBlock=(uint8_t*)malloc(BlockSize); // Allocates a Memory Block
        uint8_t* P=MemBlock; // Defines a pointer that points to the same location as the MemBlock pointer
        for(int i=0;i<BlockSize;i++){
            *P=i*23%101; // Changes the value pointed to by P
            P++;         // Moves to the next value (of MemBlock)
        }
    }else{
        BlockSize=256;
        MemBlock=(uint8_t*)malloc(BlockSize); // Allocates a Memory Block
        for(int i=0;i<BlockSize;i++){
            MemBlock[i]=i*17%23; // Fills the Memory Block with values
        }
    }
    uint8_t* P=MemBlock; // Defines a pointer that points to the same location as the MemBlock pointer
    for(int i=0;i<BlockSize;i++){
        std::cout<<(int)*P++<<' '; // Displays all values of the Memory Block; the cast makes uint8_t print as a number, and *P++ is the same as *P then P++
    }
    std::cout<<'\n'; // Ends the line to show it
    free(MemBlock); // Deallocates the Memory Block to prevent memory leaks
    return 0;
}
The Arduino IDE doesn't require you to include the library, as it is already included by default.
Here is a link for more details about the library: stdlib.h - C++ reference
If anyone knows a fast vector library, feel free to share!
Related
I am attempting to read from a text file and store all the words in a 2D array, but for some reason my code only works when the 2D array is declared as a global variable (which is not allowed for this project). When I put the 2D array in main (as shown in the code example), I get the exit code "Process finished with exit code -1073741571 (0xC00000FD)". Any idea why this is happening?
char theWord[maxLength]; // declare input space to be clearly larger than largest word
char dict_array[maxWords][maxLength];
while (inStream >> theWord) {
    for (int i = 0; i < strlen(theWord); i++) {
        dict_array[count][i] = tolower(theWord[i]);
    }
    if (strlen(theWord) >= 3) {
        count++;
    }
}
You've come to the right place to ask this, because 0xC00000FD means Stack Overflow!
Your code fails because char dict_array[maxWords][maxLength] is larger than the available stack space in your process. You can fix this easily by allocating the array using std::vector, operator new[], or malloc(), or you can fix it the hard way by increasing the size of your stack (how to do that depends on your OS, and like I said it's the hard way, don't bother).
The most idiomatic solution to this problem is std::vector<std::string>. Then you can allocate as much memory as your system has, and do so in a safe way with automatic deallocation (so-called RAII).
When it's a global variable it works fine because global variables are not allocated on the stack.
That error indicates a stack overflow. Your arrays are too large to live inside the function. Either make them global or, preferably, allocate them dynamically with new[].
Well, I am writing a program for university, where I have to put a data dump into HDF format. The data dump looks like this:
"1444394028","1","5339","M","873"
"1444394028","1","7045","V","0.34902"
"1444394028","1","7042","M","2"
"1444394028","1","7077","V","0.0470588"
"1444394028","1","5415","M","40"
"1444394028","1","7077","V","0.462745"
"1444394028","1","7076","B","10001101"
"1444394028","1","7074","M","19"
"1444394028","1","7142","M","16"
"1444394028","1","7141","V","0.866667"
For the HDF5 API I need an array. So my method at the moment is, to write the data dump into an array like this:
int count = 0;
std::ifstream countInput("share/ObservationDump.txt");
std::string line;
if (!countInput) cout << "Datei nicht gefunden" << endl; // "File not found"
while (std::getline(countInput, line)) {
    count++;
}
cout << count << endl;

struct_t writedata[count];
int i = 0;
std::ifstream dataInput("share/ObservationDump.txt");
std::string line2;
char delimeter(',');
std::string timestampTemp, millisecondsSinceStartTemp, deviceTemp, typeTemp, valueTemp;
while (std::getline(dataInput, timestampTemp, delimeter))
{
    std::getline(dataInput, millisecondsSinceStartTemp, delimeter);
    std::getline(dataInput, deviceTemp, delimeter);
    std::getline(dataInput, typeTemp, delimeter);
    std::getline(dataInput, valueTemp);
    writedata[i].timestamp = atoi(timestampTemp.substr(1, timestampTemp.size()-2).c_str());
    writedata[i].millisecondsSinceStart = atoi(millisecondsSinceStartTemp.substr(1, millisecondsSinceStartTemp.size()-2).c_str());
    writedata[i].device = atoi(deviceTemp.substr(1, deviceTemp.size()-2).c_str());
    writedata[i].value = atof(valueTemp.substr(1, valueTemp.size()-2).c_str());
    writedata[i].type = *(typeTemp.substr(1, typeTemp.size()-2).c_str());
    i++;
}
with struct_t defined as
struct struct_t
{
    int timestamp;
    int millisecondsSinceStart;
    int device;
    char type;
    double value;
};
As some of you might see, with big data dumps (around 60 thousand lines) the array writedata tends to cause a stack overflow (segmentation fault). I need an array to pass to my HDF adapter. How can I prevent the overflow? I was not able to find answers by extensive googling. Thanks in advance!
The example code you are following is in C, while the code you are writing is in C++. In most cases, valid C code is valid C++ code, although not necessarily good style; this is one of the times where it is not, although since that isn't your real problem I'll leave the explanation of it to the end of my answer.
When you declare struct_t writedata[count];, you are creating an array on the stack. The stack is often artificially limited in size, and so creating a large array on the stack could lead to a problem where you run out of stack space. This is what you are seeing. The typical solution is to create large data structures in the heap (although the primary use of the heap is to make data that lasts past the return of the function that creates it).
The most C++-idiomatic way to access the heap is to not do it directly, but to use a helper container class. In this case what you want is an std::vector, which lets you push data onto the end and will automatically grow as you push on more data. Since it automatically grows, you don’t need to specify the size in advance; just declare it as a std::vector<struct_t> writedata; (read “std::vector of struct_t”). Again, since it doesn’t need to know the size in advance, you can also ignore the whole first loop.
The vector is initially empty; to put data into it, you usually want to use writedata.push_back() or writedata.emplace_back(). The first of these takes an existing struct_t; the second takes the parameters you’d use to create one. All of the elements are stored contiguously in memory, like in a C array, which you can access directly with writedata.data().
At the end of the function, when the vector goes out of scope and is no longer accessible, its destructor will be called and automatically clean up the memory it used.
Another option, instead of using std::vector, is to manage the memory yourself. The C++ way of doing that is with new and delete. The easiest way to handle that is to still calculate count, as you do, but instead of creating the array on the stack by just declaring it as a count-sized array, you do struct_t* writedata = new struct_t[count];. This will create an array of count struct_ts in the heap, and set writedata as a pointer to the first element of this array. Then you can use it as you use the array in your program, but since it’s on the heap you won’t run out of stack space.
The downsides to this are that you need to know the size in advance, and you need to clean up the memory you used yourself. To do this, when you no longer need the data, you should run delete[] writedata. After that, writedata will still point to the same place in memory, but your program no longer owns that data, so you need to make sure to never use that value again; the standard way is to, immediately after deletion, set writedata to nullptr.
You can also use the C equivalents to new and delete, which are malloc and free. They are mostly equivalent in your case, but for more complicated examples you should keep in mind that these leave the memory uninitialized, while new and delete will run the constructors/destructors of what you create to make sure the objects are in a sane state at the beginning and don’t leave resources lying around at the end.
Now for why your original code isn’t actually valid C++ for any size of file: Your line struct_t writedata[count]; tries to create an array of count struct_ts. Since count is a variable, this is called a variable-length array (VLA). Such things are legal in newer versions of C, but not in C++. This alone is just worth a warning as long as you only want to compile the code on the same system you’re currently using, since your compiler seems to support VLAs as an extension. However, if you want to compile your code on any other system (make it more portable), you shouldn’t use compiler extensions like this.
struct_t writedata[count];
This array is allocated on the stack, which is normally quite small, and when count gets too big (the limit is semi-arbitrary) this will overflow the stack.
You'd be better off allocating on the heap by doing something like:
struct_t* writedata = (struct_t*)malloc(sizeof(struct_t) * count);
And then add a corresponding call to free once you're finished with the memory, e.g.
free(writedata);
writedata = nullptr;
It's best practice to check that i < count in your while loop, as if you write off the end of your array Bad Things may happen.
So I wrote this program in C++ to solve COJ (Caribbean Online Judge) problem 1456. http://coj.uci.cu/24h/problem.xhtml?abb=1456. It works just fine with the sample input and with some other files I wrote to test it, but I kept getting 'Wrong Answer' as a verdict, so I decided to try with a larger input file and I got Segmentation Fault: 11. The file was 1000001 numbers long, not counting the first integer, which is the number of inputs to be tested. I know that error is caused by something related to memory, but I am really lacking more information. Hope anyone can help, it is driving me nuts. I program mainly in Java so I really have no idea how to solve this. :(
#include <stdio.h>

int main(){
    long singleton;
    long N;
    scanf("%ld",&N);
    long arr [N];
    bool sing [N];
    for(int i = 0; i<N; i++){
        scanf("%ld",&arr[i]);
    }
    for(int j = 0; j<N; j++){
        if(sing[j]==false){
            for(int i = j+1; i<N; i++){
                if(arr[j]==arr[i]){
                    sing[j]=true;
                    sing[i]=true;
                    break;
                }
            }
        }
        if(sing[j]==false){
            singleton = arr[j];
            break;
        }
    }
    printf("%ld\n", singleton);
}
If you are writing in C, you should change the first few lines like this:
#include <stdio.h>
#include <stdlib.h>

int main(void){
    long singleton;
    long N;
    printf("enter the number of values:\n");
    scanf("%ld",&N);

    long *arr;
    arr = malloc(N * sizeof *arr);
    if(arr == NULL) {
        // malloc failed: handle error gracefully
        // and exit
    }
This will at least allocate the right amount of memory for your array.
Update: note that you can access these elements with the usual
arr[ii] = 0;
Just as if you had declared the array as
long arr[N];
(which doesn't work for you).
To make it proper C++, you have to convince the standard committee to add Variable length arrays to the language.
To make it valid C, you have to include <stdbool.h>.
Probably your VLA nukes your stack, consuming a whopping 4*1000001 bytes. (The bool array only adds a quarter on top of that.) Unless you use the proper compiler options, that is probably too much.
Anyway, you should use dynamic memory for that.
Also, using sing without initialisation is ill-advised.
BTW: The easiest C answer for your programming challenge is: Read the numbers into an array (allocated with malloc), sort (qsort works), output the first non-duplicate.
When you write long arr[N]; there is no way that your program can gracefully handle the situation where there is not enough memory to store this array. At best, you might get a segfault.
However, with long *arr = malloc( N * sizeof *arr );, if there is not enough memory then you will find arr == NULL, and then your program can take some other action instead, for example exiting gracefully, or trying again with a smaller number.
Another difference between these two versions is where the memory is allocated from.
In C (and in C++) there are two memory pools where variables can be allocated: automatic memory, and the free store. In programming jargon these are sometimes called "the stack" and "the heap" respectively. long arr[N] uses the automatic area, and malloc uses the free store.
Your compiler and/or operating system combination decide how much memory is available to your program in each pool. Typically, the free store has access to a "large" amount of memory, the maximum a process can have on your operating system. However, the automatic storage area may be limited in size, and has the additional drawback that if allocation fails, your process is killed or goes haywire.
Some systems use one large area and have the automatic area grow from the bottom, and free store allocations grow from the top, until they meet. On those systems you probably wouldn't run out of memory for your long arr[N], although the same drawback remains about not being able to handle when it runs out.
So you should prefer using the free store for anything that might be "large".
I need to simulate an incremental garbage collection algorithm in C++ or Java. I had a doubt based on this.
As an input (stdin from keyboard), I will be asked to allocate some memory for this code. The syntax would be:
x = alloc(128KB);
My question: is it ok to use malloc for the assignment? Or is there any other way to allocate memory? I had this doubt because, the size can go up to GB for the assignment, so using malloc might not be a good idea I think.
First of all, if you want to prohibit huge memory allocations, just check the user's input value (though I'm not sure how much memory you would consider "huge"). You may not need to worry about it at all, because if an allocation fails, malloc and calloc simply return a NULL pointer.
Secondly, you can also use calloc for this case:
void *calloc(size_t num, size_t size);
num is the number of elements to allocate and size is, of course, the size of each element. The two calls below have the same result:
ar = (int *)malloc(5 * sizeof(int));
ar = (int *)calloc(5, sizeof(int));
However, if you choose calloc, you may organise your code more logically, since you can split the memory quantity into a unit size and a count.
Also, if you use calloc, you don't need to use memset to set the memory to zero; calloc does that automatically.
I hope this helps.
malloc can allocate as much memory as you wish provided you don't go past ulimits. Give the following a go to test it out:
#include <stdlib.h>
#include <string.h>

#define ONEGB (size_t)(1073741824)

int main() {
    char *p;
    p = malloc(ONEGB);
    if (!p) {
        perror("malloc");
    }
    else {
        memset(p, 0, ONEGB);
    }
    return 0;
}
This question already has answers here:
Segmentation fault on large array sizes
(7 answers)
Closed 4 years ago.
I'm converting a program from fortran to C++.
My code seems to run fine until I add this array declaration:
float TC[100][100][100];
And then when I run it I get a segmentation fault error. This array should only take up 8Mb of memory and my machine has 3 Gb. Is there a problem with this declaration? My c++ is pretty rusty.
That array is about 4 megabytes large. If this definition is inside a function (as a local variable), then the compiler tries to store it on the stack, which on most systems cannot grow that large.
The Fortran compiler probably allocated it statically (Fortran routines are not allowed to be called recursively unless explicitly marked as recursive, so static allocation for local variables works there for non-recursive functions), and therefore the error doesn't occur there.
A simple fix would be to explicitly declare the variable static, assuming the Fortran function was not declared recursive. However this may bite you later, if you ever try to call that function recursively from a revised version. So a better solution would probably be to allocate it dynamically. However that costs extra time and therefore depending on the nature of the code, may hurt your performance too much (Fortran code quite often is numerical code where performance matters).
If you choose to make the array static, you can build in a protection against accidental recursive calls:
void yourfunction()
{
    static bool active;
    static float TC[100][100][100];
    assert(!active);
    active = true;
    // your code
    active = false;
}
I'm guessing TC is being allocated as an auto local variable. This means it's being stored on the stack. You don't get 4 MB of stack memory by default, so it's causing a stack overflow.
To solve it, use dynamic allocation with a structured container or new.
This looks like a stack-based declaration. Try allocating from the heap (i.e. using the new operator).
If you are declaring it inside of a function, as a local variable, it may be that you stack is not big enough to fit the array. You may try to allocate in the heap, with new or malloc(), or, if your design allows, make it a global variable.
In C++ the stack has a limited amount of space. MSVC defaults this size to 1MB. If the stack uses more than 1MB it will segfault or stackoverflow or something. You will have to move that structure to dynamic memory. To move it to dynamic memory, you want something like this:
typedef float bigarray[100][100][100];

bigarray& TC() {
    static bigarray* ptr = NULL;
    if (ptr == NULL) {
        // new of an array type returns a pointer to the element type,
        // so allocating an array of one bigarray yields a bigarray*;
        // the () value-initializes the floats to zero
        ptr = new bigarray[1]();
    }
    return *ptr;
}

That will allocate the structure in dynamic memory, as one contiguous block, the first time it is accessed. Alternatively, you can let a container manage the allocation, but you have to change types:
typedef std::vector<std::array<std::array<float, 100>, 100>> bigarray;
bigarray TC(100);
According to http://cs.nyu.edu/exact/core/doc/stackOverflow.txt, gcc/linux defaults the stack size to 8MB, which isn't big enough for your structure plus everything else in int main(). If you really want to, MSVC has flags to increase the stack size up to 32MB, and Linux has a ulimit command to do the same.