I'm writing a shell in C++, and I need a constructor for my CommandLine class that parses a command from an istream and then uses calloc() and free() to dynamically create the argc and argv variables. I don't get how to use these functions to do this, and no tutorial has helped.
Here's an example very similar to what you're asking for. I found it here: http://www.cplusplus.com/reference/clibrary/cstdlib/calloc
function calloc
void * calloc ( size_t num, size_t size );

Allocate space for array in memory

Allocates a block of memory for an array of num elements, each of them size bytes long, and initializes all its bits to zero. The effective result is the allocation of a zero-initialized memory block of (num * size) bytes.

Parameters

num
    Number of elements to be allocated.
size
    Size of each element, in bytes.

Return Value

A pointer to the memory block allocated by the function. The type of this pointer is always void*, which can be cast to the desired type of data pointer in order to be dereferenceable. If the function failed to allocate the requested block of memory, a NULL pointer is returned.

Example
/* calloc example */
#include <stdio.h>
#include <stdlib.h>

int main ()
{
  int i,n;
  int * pData;
  printf ("Amount of numbers to be entered: ");
  scanf ("%d",&i);
  pData = (int*) calloc (i,sizeof(int));
  if (pData==NULL) exit (1);
  for (n=0;n<i;n++)
  {
    printf ("Enter number #%d: ",n);
    scanf ("%d",&pData[n]);
  }
  printf ("You have entered: ");
  for (n=0;n<i;n++) printf ("%d ",pData[n]);
  free (pData);
  return 0;
}
Why is there a difference in the output of pointer size between the C code and the Arduino IDE?
I will simplify your problem:
In your computer:
int *int_pointer;
printf("Size of int_pointer: %zu\n", sizeof(int_pointer)); // Output: 8
In your ESP32:
int *int_pointer;
printf("Size of int_pointer: %zu\n", sizeof(int_pointer)); // Output: 4
That is the difference.
A pointer stores a memory address so its size will be (at least) the same as the size of an address in its respective processor.
Your computer is a 64-bit system -> The size of each address is 8 bytes -> The pointer needs to have 8 bytes to be able to store it.
Your ESP32 is a 32-bit system -> The size of each address is 4 bytes -> The pointer only needs to have 4 bytes to be able to store it.
This struct
typedef struct str
{
    int* array;
    int y;
    int z;
} str;
contains a pointer (int *array). The size of a pointer depends on where the code is running: on a 32-bit machine it will be 4 bytes, on a 64-bit machine 8. Other sizes exist too (2 bytes on the PDP-11, 6 on some mainframes).
So the sizeof operator will return different values on different systems.
int main(){
    int * x = new int[3];
    for(int i = 0; i < 5; ++i){
        x = new int[i];
        x[i] = i;
        cout << i << endl;
    }
}
So say I have an integer array allocated on the heap, with capacity for 3 integers, as seen in x.
Now say I have this for loop where I want to change the values of x into whatever i is.
Now when I run this code by itself it does run, and 0,1,2,3,4 prints like I want.
What I'm wondering is: when I do new int[i] when i is 0, 1, 2, since x[0], x[1], x[2] are already allocated on the heap, am I making three new addresses in the heap?
Thanks!
int main(){
    int * x = new int[3];
    for(int i = 0; i < 5; ++i){
        x = new int[i];
        x[i] = i;
        cout << i << endl;
    }
}
Running it
-> An array of size 3 is created on the heap (its address is then lost when x is reassigned, a leak)
-> An array of size 0 is created on the heap
-> Index 0 of the size-0 array is set to 0 (out of bounds: a size-0 array has no elements)
-> Print 0
-> An array of size 1 is created on the heap
-> Index 1 of the size-1 array is set to 1 (out of bounds: the only valid index is 0)
-> Print 1
.
.
.
-> An array of size 4 is created on the heap
-> Index 4 of the size-4 array is set to 4 (out of bounds: valid indices are 0 to 3)
-> Print 4
I'm not sure if this is your intention, but as the rest of the comments said, there are memory leaks and undefined behavior.
I think what you're trying to implement is instead
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> g1; // initialises a vector called g1
    for (int i = 1; i <= 5; i++) {
        // Adds i to the vector, expanding it if
        // there is not enough space
        g1.push_back(i);
        // Prints out i
        std::cout << i << std::endl;
    }
}
Every time you new something, you are requesting memory: the system allocates a memory block (the request may fail, but that is another matter) and gives you a pointer to the initial address of that block. Remember this every time you use new. So here you initially new an array of 3 ints, then in the loop you new 5 more times, which returns 5 new memory addresses (5 different memory blocks). You end up with 6 addresses (memory blocks of different sizes) to deal with, which is definitely not what you want. You should instead use the first allocation without any further new in the loop, but in that case you have to know the bounds of the array beforehand. To make that automatic, you can use a vector, which grows as you push elements into it.
Please refer to the std::vector documentation.
A side note: when you new something, you must take care of that memory yourself, so raw new is generally discouraged; please look at smart pointers to make your code safer.
Can you allocate memory for an array already on the heap?
Answer: Yes (but not how you are doing it...)
Whenever you have memory already allocated, in order to expand or reduce the allocation size making up a given block of memory, you must (1) allocate a new block of memory of the desired size, and (2) copy the existing block to the newly allocated block (up to the size of the newly allocated block), before (3) freeing the original block. In essence since there is no equivalent to realloc in C++, you simply have to do it yourself.
In your example, beginning with an allocation size of 3-int, you can enter your for loop and create a temporary block to hold i + 1 ints (one more than the loop index) and copy the number of existing bytes in x that will fit in your new tmp block to tmp. You can then delete[] x; and assign the beginning address of the new temporary block of memory to x (e.g. x = tmp;).
A short example continuing from your post could be:
#include <iostream>
#include <cstring>

int main (void) {

    int nelem = 3,              /* var to track no. of elements allocated */
        *x = new int[nelem];    /* initial allocation of 3 int - for fun */

    for (int i = 0; i < 5; i++) {
        nelem = i + 1;          /* update nelem */
        /* create temporary block to hold nelem int */
        int *tmp = new int[nelem];          /* allocate tmp for new x */
        memcpy (tmp, x, i * sizeof *tmp);   /* copy i elements to tmp */
        delete[] x;             /* free x */
        x = tmp;                /* assign tmp to x */
        x[i] = i;               /* assign x[i] */
        for (int j = 0; j < nelem; j++)     /* output all */
            std::cout << " " << x[j];
        std::cout << '\n';
    }
    delete[] x;     /* free x */
}
(note: on the first iteration zero bytes are copied from x -- which is fine. You can include an if (i) before the memcpy if you like)
Example Use/Output
$ ./bin/allocrealloc
0
0 1
0 1 2
0 1 2 3
0 1 2 3 4
Memory Use/Error Check
In any code you write that dynamically allocates memory, you have 2 responsibilities regarding any block of memory allocated: (1) always preserve a pointer to the starting address for the block of memory so, (2) it can be freed when it is no longer needed.
It is imperative that you use a memory error checking program to ensure you do not attempt to access memory or write beyond/outside the bounds of your allocated block, attempt to read or base a conditional jump on an uninitialized value, and finally, to confirm that you free all the memory you have allocated.
For Linux valgrind is the normal choice. There are similar memory checkers for every platform. They are all simple to use, just run your program through it.
$ valgrind ./bin/allocrealloc
==6202== Memcheck, a memory error detector
==6202== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==6202== Using Valgrind-3.12.0 and LibVEX; rerun with -h for copyright info
==6202== Command: ./bin/allocrealloc
==6202==
0
0 1
0 1 2
0 1 2 3
0 1 2 3 4
==6202==
==6202== HEAP SUMMARY:
==6202== in use at exit: 0 bytes in 0 blocks
==6202== total heap usage: 7 allocs, 7 frees, 72,776 bytes allocated
==6202==
==6202== All heap blocks were freed -- no leaks are possible
==6202==
==6202== For counts of detected and suppressed errors, rerun with: -v
==6202== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Always confirm that you have freed all memory you have allocated and that there are no memory errors.
Look things over and let me know if you have further questions.
I'll just throw your code back at you with some comments that hopefully clear things up a bit.
int main()
{
    // Allocate memory for three integers on the heap.
    int * x = new int[3];

    for(int i = 0; i < 5; ++i)
    {
        // Allocate memory for i (0-4) integers on the heap.
        // This overwrites the pointer x, leaking the block allocated earlier.
        // What happens when i is zero?
        x = new int[i];

        // Set element i of the newly allocated array to its index.
        // Note: the array has only i elements (indices 0 to i-1),
        // so x[i] writes past the end.
        x[i] = i;

        // Print out current index of 0-4.
        cout << i << endl;
    }
}
My goal is to reverse a string two digits at a time, e.g. 123456 becomes 563412.
I'm using the valgrind tool to check for memory errors, but the strlen(reverse_chr) call produces this error:
Conditional jump or move depends on uninitialized value(s)
Here is my code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h> /* for malloc/free */

int main()
{
    char chr[] = "123456";
    char* reverse_chr = (char *) malloc(strlen(chr)+1);
    memset(reverse_chr, 0, strlen(chr));
    int chrlen = strlen(chr);
    for (int t = 0; t < chrlen; t += 2)
    {
        reverse_chr[t] = chr[chrlen-t-2];
        reverse_chr[t+1] = chr[chrlen-t-1];
    }
    int len_reverse_chr = strlen(reverse_chr);
    free(reverse_chr);
    return 0;
}
I expect output without any valgrind error.
The problem is that reverse_chr is not a valid string as it's not properly terminated.
char* reverse_chr=(char *) malloc(strlen(chr)+1);
memset(reverse_chr, 0, strlen(chr));
You allocate 7 bytes, but only set the first 6 to 0.
for (int t=0; t<chrlen; t+=2)
{
reverse_chr[t]=...
reverse_chr[t+1]=...
This for loop also only writes to the first 6 elements of reverse_chr.
int len_reverse_chr = strlen(reverse_chr);
Then this line tries to find a NUL byte in reverse_chr, but the first 6 elements aren't '\0' and the 7th is uninitialized (hence the complaint by valgrind).
Fix:
Either do
reverse_chr[chrlen] = '\0';
after the loop, or use calloc:
reverse_chr = static_cast<char *>(calloc(strlen(chr)+1, sizeof *reverse_chr));
This way all allocated bytes are initialized (and you don't need memset anymore).
I am using cudaMemGetInfo in order to get the vram currently used by the system.
extern __host__ cudaError_t CUDARTAPI cudaMemGetInfo(size_t *free, size_t *total);
And I am having two problems:
The main one is that the free value returned is only right when the graphics device has almost no memory left for allocation. Otherwise it stays at about 20% memory used even when GPU-Z clearly states that about 80% is used; then, when I reach 95% memory used, cudaMemGetInfo suddenly returns a good value. Note that the total memory is always correct.
The second problem is that as soon as I use the function, video memory is allocated: at least 40 MB, but it can reach 400 on some graphics devices.
My code :
#include <cuda_runtime.h>

size_t Profiler::GetGraphicDeviceVRamUsage(int _NumGPU)
{
    cudaSetDevice(_NumGPU);

    size_t l_free = 0;
    size_t l_Total = 0;
    cudaError_t error_id = cudaMemGetInfo(&l_free, &l_Total);

    return (l_Total - l_free);
}
I tried with 5 different NVIDIA graphics devices. The problems are always the same.
Any ideas?
On your first point, I cannot reproduce this. If I expand your code into a complete example:
#include <iostream>

size_t GetGraphicDeviceVRamUsage(int _NumGPU)
{
    cudaSetDevice(_NumGPU);

    size_t l_free = 0;
    size_t l_Total = 0;
    cudaError_t error_id = cudaMemGetInfo(&l_free, &l_Total);

    return (l_Total - l_free);
}

int main()
{
    const size_t sz = 1 << 20;
    for(int i=0; i<20; i++) {
        size_t before = GetGraphicDeviceVRamUsage(0);
        char *p;
        cudaMalloc((void **)&p, sz);
        size_t after = GetGraphicDeviceVRamUsage(0);
        std::cout << i << " " << before << "->" << after << std::endl;
    }

    return cudaDeviceReset();
}
I get this on a linux machine:
$ ./meminfo
0 82055168->83103744
1 83103744->84152320
2 84152320->85200896
3 85200896->86249472
4 86249472->87298048
5 87298048->88346624
6 88346624->89395200
7 89395200->90443776
8 90443776->91492352
9 91492352->92540928
10 92540928->93589504
11 93589504->94638080
12 94638080->95686656
13 95686656->96735232
14 96735232->97783808
15 97783808->98832384
16 98832384->99880960
17 99880960->100929536
18 100929536->101978112
19 101978112->103026688
and I get this on a Windows WDDM machine:
>meminfo
0 64126976->65175552
1 65175552->66224128
2 66224128->67272704
3 67272704->68321280
4 68321280->69369856
5 69369856->70418432
6 70418432->71467008
7 71467008->72515584
8 72515584->73564160
9 73564160->74612736
10 74612736->75661312
11 75661312->76709888
12 76709888->77758464
13 77758464->78807040
14 78807040->79855616
15 79855616->80904192
16 80904192->81952768
17 81952768->83001344
18 83001344->84049920
19 84049920->85098496
Both seem consistent to me.
On your second point: cudaSetDevice establishes a CUDA context on the device number you pass to it, if no context already exists. Establishing a CUDA context reserves memory for the runtime components required to run CUDA code. So it is completely normal that calling the function consumes memory, if it is the first function containing a CUDA API call that you invoke.
I have a need to allocate all memory available to a process, in order to implement a test of a system service. The test (among others) requires exhausting all available resources, attempting the call, and checking for a specific outcome.
To do this, I wrote a loop that doubles the size of a reallocated block until realloc returns null. Then, keeping the last good allocation, it halves the difference between the last successful size and the last unsuccessful size until the unsuccessful size is 1 byte larger than the last successful one, guaranteeing that all available memory is consumed.
The code I wrote is as follows (debug prints included):
#include <stdio.h>
#include <malloc.h>

int main(void)
{
    char* X;
    char* lastgood = NULL;
    char* toalloc = NULL;
    unsigned int top = 1;
    unsigned int bottom = 1;
    unsigned int middle;
    do
    {
        bottom = top;
        lastgood = toalloc;
        top = bottom*2;
        printf("lastgood = %p\ntoalloc = %p\n", lastgood, toalloc);
        if (lastgood != NULL)
            printf("*lastgood = %i\n", *lastgood);
        toalloc = realloc(toalloc, top);
        printf("lastgood = %p\ntoalloc = %p\n", lastgood, toalloc);
        if (toalloc == NULL && lastgood != NULL)
            printf("*lastgood = %i\n", *lastgood); //segfault happens here
    } while(toalloc != NULL);
    do
    {
        if (toalloc != NULL) lastgood = toalloc;
        else toalloc = lastgood;
        middle = bottom+(top - bottom)/2;
        toalloc = realloc(toalloc, middle);
        if (toalloc == NULL) top = middle;
        else bottom = middle;
    } while(top - bottom > 1);
    if (toalloc != NULL) lastgood = toalloc;
    X = lastgood;
    //make a call that attempts to get more memory
    free(X);
}
According to realloc's manpage, realloc does not destroy the previous address if it returns null. Even so, this code results in a segfault when it tries to print lastgood after toalloc receives NULL from realloc. Why is this happening, and is there a better way to just grab the exact quantity of unallocated memory?
I am running it on glibc, on ubuntu with kernel 3.11.x
You are not checking the value of top for overflow. This is what happens with its value:
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
1048576
2097152
4194304
8388608
16777216
33554432
67108864
134217728
268435456
536870912
1073741824
2147483648
0
Just before the last realloc(), the new value of top is 0 again (it is actually 2^32, which doesn't fit in 32 bits, so it wraps around to 0). realloc(toalloc, 0) then deallocates the block and returns NULL, leaving lastgood pointing at freed memory, which is why dereferencing it segfaults.
Trying to allocate the maximum contiguous block is not a good idea. The memory map seen by the user process already has some blocks allocated for shared libraries, and for the code and data of the current process. Unless you want to know the maximum contiguous memory block you can allocate, the way to go is to allocate as much as you can in a single block; when you have reached that, do the same with a different pointer, and keep doing that until you really run out of memory. Take into account that on 64-bit systems you don't get all available memory in just one malloc()/realloc(). As I've just seen, malloc() on 64-bit systems allocates up to 4 GB of memory in one call, even when you can issue several mallocs() and still succeed on each of them.
A visual of a user process memory map as seen in a 32-bit Linux system is described in an answer I gave a few days ago:
Is kernel space mapped into user space on Linux x86?
I've come up with this program, to "eat" all the memory it can:
#include <stdio.h>
#include <malloc.h>

typedef struct slist
{
    char *p;
    struct slist *next;
} TList;

int main(void)
{
    size_t nbytes;
    size_t totalbytes = 0;
    int i = 0;
    TList *list = NULL, *node;

    node = malloc (sizeof *node);
    while (node)
    {
        node->next = list;
        list = node;
        nbytes = -1; /* -1 converts to SIZE_MAX */
        node->p = malloc(nbytes);
        while (nbytes && !node->p)
        {
            nbytes /= 2;
            node->p = malloc(nbytes);
        }
        totalbytes += nbytes + sizeof *node;
        if (nbytes == 0)
            break;
        i++;
        printf ("%8d", i);
        node = malloc (sizeof *node); /* next list node */
    }
    printf ("\nBlocks allocated: %d. Memory used: %f GB\n",
            i, totalbytes/(1024*1048576.0));
    return 0;
}
The execution yields these values in a 32-bit Linux system:
1 2 3 4 5 6 7 8 9 10
11 12 13 14 15 16 17 18 19 20
21 22 23 24 25 26 27 28 29 30
31 32 33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48 49 50
51 52 53
Blocks allocated: 53. Memory used: 2.998220 GB
Very close to the 3 GB limit on 32-bit Linux systems. On a 64-bit Linux system, I've reached 30000 blocks of 4 GB each, and it is still counting. I don't really know if Linux can allocate that much memory or if it's my mistake. According to this, the maximum virtual address space is 128 TB (that would be 32768 4 GB blocks).
UPDATE: in fact, so it is. I left this program running on a 64-bit box, and after 110074 successfully allocated blocks, the grand total of memory allocated was 131071.578884 GB. Each malloc() was able to allocate as much as 4 GB per operation, but when it reached 115256 GB it began to allocate up to 2 GB, and when it reached 123164 GB allocated it began to allocate up to 1 GB per malloc(). The progression tends asymptotically to 131072 GB, but it actually stops a little earlier, at 131071.578884 GB, because the process itself, its data, and its shared libraries use a few KB of memory.
I think getrlimit() is the answer to your new question. Here is the man page: http://linux.die.net/man/2/getrlimit
You are probably interested in RLIMIT_AS (the total address-space limit) and RLIMIT_DATA rather than RLIMIT_STACK, since malloc() draws on the heap. Please check the man page.
The first part can be replaced with the following:
char *p;
size_t size = SIZE_MAX; /* SIZE_MAX: #include <stdint.h> */
do {
    p = malloc(size);
    size >>= 1;
} while (p == NULL);