Are there any advantages of using normal arrays over arrays of pointers (and vice versa)?
When should I use an array of pointers, and when should I avoid it?
In college, everyone seems to be crazy about pointers. I know std::vector is an easy way out, but we are not allowed to use it; we're asked to solve things without the STL for now.
I got an answer on SO (link), but it went way over my head.
For example: is using int array[] better, or is int* parray[] better?
int array[] is an array of ints: it holds a collection of integer values. Think of it as a placeholder for a number of integers. When you use a plain array like this in C++, you must give it a fixed size before you use it:
int array[5];
The size goes inside the square brackets []; without it, the code won't compile. The disadvantage of this normal array is that you have to know the size up front, otherwise the program won't work. What if your estimated size differs from the actual usage? What if your estimate is much, much larger than what you really need? It will cost you a lot of memory.
int *array[] with no size is not a valid definition in C++. If you want an array whose size isn't known until run time, allocate it dynamically:
int size;
cout << "How big is the array? ";
cin >> size;
int* p = new int[size];   // size is chosen at run time
// ... use p[0] .. p[size - 1] ...
delete[] p;               // every new[] needs a matching delete[]
That way, you don't need to know the size before run time, so you won't waste memory; just remember to release the memory with delete[] when you are done with it.
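As for when an array of pointers is useful: a common case is a jagged two-dimensional structure, where each row is allocated separately and can have its own length. A minimal sketch (the sizes and values are made up for illustration):
#include <iostream>

int main()
{
    // An array of pointers: each slot points to a separately allocated row,
    // so the rows can have different lengths (a "jagged" structure).
    int* table[3];
    for (int r = 0; r < 3; ++r)
    {
        int cols = r + 1;               // each row gets its own length
        table[r] = new int[cols];
        for (int c = 0; c < cols; ++c)
            table[r][c] = r * 10 + c;
    }

    std::cout << table[2][1] << '\n';   // prints 21

    for (int r = 0; r < 3; ++r)
        delete[] table[r];              // one delete[] per new[]
    return 0;
}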
I want to halve the number of elements the data structure contains, and I have to do that multiple times.
The problem is similar to this:
I have n sorted integers and I have to add consecutive pairs of numbers, so the number of integers I am left with is n/2. I have to repeat this until I get a single number. (I've simplified the problem; I also have to do other operations along the way.)
I thought of starting with an array of size n, then creating an array of size n/2, filling the new array, and finally freeing the original array (which was created through a pointer). Note that I also have to store the data I compute in each pass of the loop.
If I am not able to explain it well, please refer to this problem:
MIXTURE
Use a pointer to memory allocated with malloc (or calloc, or similar), then resize it with realloc:
#include <cstdlib>

int main()
{
    // In C++ the void* returned by malloc/realloc must be cast explicitly.
    int* myArray = static_cast<int*>(std::malloc(50 * sizeof(int))); // gets you 50 integers
    // perform operations on myArray, accessing it like myArray[3]

    int* r = static_cast<int*>(std::realloc(myArray, 25 * sizeof(int))); // shrink to 25
    if (r) {
        myArray = r;          // realloc may move the block, so keep the new pointer
    }
    // perform some more operations.

    std::free(myArray);       // free the memory once you are done with it
}
realloc returns a pointer to a block holding the same contents as the one you passed in (truncated if the new size is smaller); the block may have been moved, which is why you assign the result back to myArray as shown. It can also return NULL on failure, so check the result before overwriting myArray. When you no longer need the array, call free on it exactly once, just as you would for the original allocation.
Which data structure can I use if I want to reduce its memory by half during execution?
I want to halve the number of elements the data structure contains, and I have to do that multiple times.
You can use any dynamically sized data structure, which is pretty much every data structure except the statically sized array.
Note, though, that actually shrinking the memory allocated by std::vector (via shrink_to_fit) is technically not guaranteed to happen; the request is non-binding. Example using a vector:
vec.resize(vec.size() / 2);
vec.shrink_to_fit();
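Putting this together for the repeated-halving problem from the question, here is a rough sketch; it assumes the combining step is simply adding consecutive pairs and that n is a power of two:
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> vec = {1, 2, 3, 4, 5, 6, 7, 8};   // assumed sample input

    while (vec.size() > 1)
    {
        // combine consecutive pairs: element i of the halved array is vec[2*i] + vec[2*i + 1]
        for (std::size_t i = 0; i < vec.size() / 2; ++i)
            vec[i] = vec[2 * i] + vec[2 * i + 1];

        vec.resize(vec.size() / 2);
        vec.shrink_to_fit();    // requests (does not guarantee) release of the spare capacity
    }

    std::cout << vec[0] << '\n';   // 36, the total of the original values
}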
Is there a way to take two pieces of heap-allocated memory and put them together efficiently? Would that be more efficient than the following?
for( unsigned int i = 0; i < LENGTH_0; ++i )
    AddToArray( buffer, array0[ i ] );
for( unsigned int i = 0; i < LENGTH_1; ++i )
    AddToArray( buffer, array1[ i ] );
For copying raw memory byte by byte, you can't go wrong with memcpy; it is about the fastest way to move bytes around.
There are several caveats, however. You have to make sure the destination buffer is big enough, and you have to compute object sizes yourself (with the sizeof operator). It is only valid for trivially copyable types, so it won't work with objects such as shared_ptr. And it looks pretty gross in the middle of otherwise elegant C++11.
Your way works too and should be nearly as fast.
You should strongly consider C++'s copy algorithm (or one of its siblings) and use vectors that resize on the fly. You get to use iterators, which are much nicer. It should be nearly as fast as memcpy, with the added benefit that it is far, far safer than moving raw bytes around: shared_ptr and its ilk will work as expected.
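As an illustration, a sketch of that approach, assuming array0 and array1 hold plain ints and LENGTH_0/LENGTH_1 are their lengths (the helper name concatenate is made up for the example):
#include <algorithm>
#include <cstddef>
#include <vector>

// Concatenate two raw int arrays into one vector using std::copy.
std::vector<int> concatenate(const int* array0, std::size_t LENGTH_0,
                             const int* array1, std::size_t LENGTH_1)
{
    std::vector<int> buffer(LENGTH_0 + LENGTH_1);          // allocate once
    std::copy(array0, array0 + LENGTH_0, buffer.begin());
    std::copy(array1, array1 + LENGTH_1, buffer.begin() + LENGTH_0);
    return buffer;
}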
I'd do something like this until it's proven too slow:
#include <type_traits>
#include <vector>

// decltype(*array0) is a reference type, so strip the reference to get the element type
std::vector<std::remove_reference<decltype(*array0)>::type> vec;
vec.reserve(LENGTH_0 + LENGTH_1);
vec.insert(vec.end(), array0, array0 + LENGTH_0);
vec.insert(vec.end(), array1, array1 + LENGTH_1);
Depending on the data stored in array0 and array1, that might be as fast as or even faster than calling a function for every single element.
I read a full file into a string. This is very quick (for example, a 180 MB file in about 2 s).
Then I extract some values from the string using the >> operator, build several arrays from them, put the arrays into a struct, and add each struct to a vector.
I'm trying to find the bottleneck, because this part is very slow (but maybe nothing can be done about it).
Is the >> approach fast?
std::string str;            // gets filled with the contents of the file
std::istringstream in(str); // extraction with >> needs a stream over the string
struct A { int val; };      // simplified
std::vector<A> b;           // global variables
// in the function, inside the loop:
A a;
in >> a.val;
b.push_back(a);
Does the vector take ownership of a, or does it make a copy? Is a still on the stack? I have about 60,000 structs that get inserted into the vector. Is this a fast approach, or is there a better one?
Question: Is the >> approach fast?
Answer: Fast is relative. What do you compare it with?
Question: Does the vector take ownership of a, or does it make a copy?
Answer: std::vector::push_back() makes a copy of the object you pass in.
Question: Is a still on the stack?
Answer: Judging solely by the posted code, a is a local object on the stack; b, as posted, is a global.
Question: I have about 60,000 structs that get inserted into the vector. Is this a fast approach, or is there a better one?
Answer: You might gain some performance by creating b with the required size up front and reading the data directly into the objects in b.
std::vector<A> b(60000);
for (std::size_t i = 0; i < b.size(); ++i)   // use whatever looping construct fits
{
    in >> b[i].val;
}
Update
If you are able to, writing and reading the data in binary form will be the fastest. Use std::ostream::write() to write the data and std::istream::read() to read the data.
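For example, a sketch of what binary writing and reading could look like here, assuming A is a plain, trivially copyable struct with an int val member (the save/load names and the struct layout are illustrative, not taken from the question):
#include <cstddef>
#include <fstream>
#include <vector>

struct A { int val; };   // assumed layout; binary I/O requires trivially copyable data

void save(const std::vector<A>& b, const char* filename)
{
    std::ofstream out(filename, std::ios::binary);
    out.write(reinterpret_cast<const char*>(b.data()), b.size() * sizeof(A));
}

std::vector<A> load(const char* filename, std::size_t count)
{
    std::vector<A> b(count);
    std::ifstream in(filename, std::ios::binary);
    in.read(reinterpret_cast<char*>(b.data()), count * sizeof(A));
    return b;
}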
C-style I/O will often be faster than C++ iostreams. Try parsing chunks of data with fscanf() (see: http://www.cplusplus.com/reference/cstdio/fscanf/) and you'll likely find the C approach runs a lot faster.
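For example, a rough sketch of that C-style approach, assuming the input is a file of whitespace-separated integers (the file name data.txt is made up):
#include <cstdio>
#include <vector>

int main()
{
    std::FILE* f = std::fopen("data.txt", "r");
    if (!f) return 1;

    std::vector<int> values;
    int v;
    while (std::fscanf(f, "%d", &v) == 1)   // fscanf returns the number of items converted
        values.push_back(v);

    std::fclose(f);
}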
I was writing code for my homework, and as I finished one of my classes I ran into a question: is having a loop that assigns values to an array a good idea?
This is my class. I was considering either putting the loop in the constructor or creating a function that assigns the values later, called manually.
Are these choices different? If yes, which one is better, and why?
class Mule
{
private:
    int numofMules;
    int MapSize;
    MuleNode* field;
    MuleNode* mules;
public:
    void random_positions();
    void motion();
    void print_field();
    Mule(int nofMules, int mSize)
    {
        numofMules = nofMules;
        MapSize = mSize;
        mules = new MuleNode[numofMules];
        field = new MuleNode[MapSize * MapSize];
        for (int i = 0; i < numofMules; i++)
        {
            mules[i].ID = i + 1;
        }
        random_positions();
    }
};
I edited the code because of the problem of allocating a two-dimensional array whose size isn't known at compile time; instead I represent the 2-dimensional array in one dimension using these formulas:
+---------------+-----------+-----------+
| i = y * w + x | x = i % w | y = i / w |   (w = width of the 2-dimensional array)
+---------------+-----------+-----------+
Conclusion: since the question was marked as opinion-based, I take it that there is no big difference between putting the loop in the constructor and creating a function that assigns the values later.
If there are any facts or opinions on this question worth sharing, please comment or write an answer.
There's not necessarily anything terrible about having a loop in a constructor.
At the same time, it's worth considering whether the items you're initializing shouldn't be objects that know how to initialize themselves, instead of creating uninitialized instances and then writing values into them.
As written, though, the code doesn't really make much sense. The class is named Mule, but judging by the constructor it's really a collection of Mules. A Mule should be exactly that: one mule. A collection of N mules should be something like a std::vector<Mule>. A Mule that's really a collection of Mules is a poor idea.
You should also at least consider using std::vector instead of a raw array (assuming the class ends up holding a collection of items at all, of course).
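For illustration only, a rough sketch of that separation; the MuleHerd name and the members shown are invented for the example, not taken from the original class:
#include <vector>

// One Mule per object; it knows how to initialize itself.
struct Mule
{
    int id;
    int x, y;                            // hypothetical position members

    explicit Mule(int id_) : id(id_), x(0), y(0) {}
};

class MuleHerd                           // the container type owns the collection
{
    std::vector<Mule> mules;
public:
    explicit MuleHerd(int numOfMules)
    {
        mules.reserve(numOfMules);
        for (int i = 0; i < numOfMules; ++i)
            mules.emplace_back(i + 1);   // each Mule initializes itself
    }
};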
In general it's not a good idea, but some constructors require a loop (for example, initializing a heap-allocated array that the constructor itself creates). And not all constructors are called often (a singleton's constructor, for example, runs only once per process).
In the end, it depends on the class and on the program/object design.
Your particular class appears to be created only once per process, so my take is that it is OK. If that is not the case, it has to be evaluated case by case.
As far as I understand, RAM is organized like a grid of rows and columns of cells, each cell holding 1 byte, and each cell is labeled with a memory address written in hexadecimal. Is this so? Now, when a C++ program runs, I suppose it uses RAM as its storage. In that case, since char is the basic unit of storage in C++, is the size of a char exactly the same as a cell (1 byte)? Does the size of a char depend on the size of a cell (in case a cell is not 1 byte)? Does it depend on the compiler? Thank you so much.
It is easy to visualize RAM as a grid of rows and columns. This is how most CS classes teach it, and for most purposes it works well at a conceptual level.
One thing you must know when writing C++ programs is that there are two kinds of memory: the stack and the heap. The stack stores variables while they are in scope; when they go out of scope, they are removed. Think of it as a stack data structure (LIFO: last in, first out).
Heap memory is slightly more complicated. It has nothing to do with the scope of a variable: you can allocate a location to hold a particular value, and it stays there until you free it. You allocate heap memory with the new keyword.
For instance: int* abc = new int(2);
This means the pointer abc points to a heap location containing the value 2. You must explicitly free the memory with the delete keyword once you are done with it; failing to do so causes a memory leak.
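A minimal sketch of how allocation and release pair up (the variable names are just for illustration):
int main()
{
    int* single = new int(2);   // one int on the heap, initialized to 2
    int* block  = new int[10];  // ten ints on the heap

    // ... use the memory ...

    delete single;              // matches new
    delete[] block;             // matches new[]
    return 0;
}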
In C, the type of a character constant like 'a' is actually int, typically of size 4. In C++ the type is char, with size 1; sizeof(char) is 1 by definition and does not depend on the compiler. The sizes of int, float and the like do depend on your platform and implementation (16/32/64-bit). Use the statement:
int a = 5;
cout << sizeof(a) << endl;
to determine the size of int in your system.
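For a broader picture, a small sketch that prints several sizes at once; the exact values for int, float and double vary by platform, and only sizeof(char) == 1 (with CHAR_BIT of at least 8 bits per byte) is guaranteed:
#include <climits>
#include <iostream>

int main()
{
    std::cout << "CHAR_BIT: "       << CHAR_BIT       << '\n'   // bits per byte, at least 8
              << "sizeof(char): "   << sizeof(char)   << '\n'   // always 1 by definition
              << "sizeof(int): "    << sizeof(int)    << '\n'
              << "sizeof(float): "  << sizeof(float)  << '\n'
              << "sizeof(double): " << sizeof(double) << '\n';
}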