Difference between dynamic and static 2D arrays in C++

I'm using an open-source library called wxFreeChart to draw some XY charts. In the example there is code which uses a static array as a series:
double data1[][2] = {
{ 10, 20, },
{ 13, 16, },
{ 7, 30, },
{ 15, 34, },
{ 25, 4, },
};
dataset->AddSerie((double *) data1, WXSIZEOF(data1));
WXSIZEOF is a macro defined as: sizeof(array)/sizeof(array[0])
In this case everything works great, but in my program I'm using dynamic arrays (sized according to the user's input).
I made a test and wrote code like the one below:
double **dynamicArray = NULL;
dynamicArray = new double *[5] ;
for( int i = 0 ; i < 5 ; i++ )
dynamicArray[i] = new double[2];
dynamicArray [0][0] = 10;
dynamicArray [0][1] = 20;
dynamicArray [1][0] = 13;
dynamicArray [1][1] = 16;
dynamicArray [2][0] = 7;
dynamicArray [2][1] = 30;
dynamicArray [3][0] = 15;
dynamicArray [3][1] = 34;
dynamicArray [4][0] = 25;
dynamicArray [4][1] = 4;
dataset->AddSerie((double *) *dynamicArray, WXSIZEOF(dynamicArray));
But it doesn't work correctly; I mean the points aren't drawn. I wonder if there is any way I can "cheat" that method and give it a dynamic array in a form it understands, so it reads the data from the correct place.
Thanks for the help

You can't use the WXSIZEOF macro on dynamically allocated arrays. That macro is for determining the size of an actual array; here you have a pointer :) You can't use that trick with arrays whose size isn't a compile-time constant.
The parameter wants the number of pairs in the array - and uses a tricky macro to figure it out (using the macro is better for maintainability - there's only one place that uses the size constant).
You can probably simply pass 5 to the function (or whatever variable you use to determine the size of your array).
(I should add that I'm not familiar with this particular API... and it could be doing something funky that would make this not work... but I doubt it)
EDIT. It appears (from some comments) that this function does require contiguous storage.
I don't think you need to write your own function to lay these elements out contiguously in memory. That would be a lot of reallocation and copying. More likely, you should be using a different class. After browsing their very minimal documentation, it looks like you can use XYDynamicSerie to build a dynamic list of points and then add it to an XYDynamicDataset or something.
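If you still want to "cheat" the (double *) overload directly, a minimal sketch would be a single flat allocation. This assumes AddSerie only reads a contiguous buffer of x/y pairs plus a pair count, and that the buffer must stay alive as long as the dataset uses it; I have not verified either against the API:
#include <vector>

size_t pointCount = 5;                        // e.g. taken from user input
std::vector<double> flat(pointCount * 2);     // contiguous storage for all pairs
flat[0 * 2 + 0] = 10;  flat[0 * 2 + 1] = 20;  // pair 0
flat[1 * 2 + 0] = 13;  flat[1 * 2 + 1] = 16;  // pair 1
// ... fill the remaining pairs the same way
dataset->AddSerie(&flat[0], pointCount);      // pass the pair count explicitly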

If you define an array like
double myArr[5][2];
All cells occupy one contiguous chunk of memory, and I'm pretty sure dataset->AddSerie relies on that.
You can't guarantee that if you allocate the memory in separate chunks, using consecutive calls to new.
My suggestion is to write a simple class that allocates a contiguous chunk of memory for storage and uses operator() to access that memory as a two-dimensional array via two indices. Internally you can use a vector<double> to manage the storage, and you can pass the address of the first element of that vector to dataset->AddSerie.
Please check the code in this C++ FAQ example and try to understand it. The matrix example uses new[] and delete[]; you should use a vector instead, and the type double instead of Fred.
Where the example has a private section like this:
class Matrix {
public:
...
private:
unsigned nrows_, ncols_;
Fred* data_;
};
(The example shows a matrix of Freds.) You should use a vector<double> instead:
class Matrix {
public:
...
private:
unsigned nrows_, ncols_;
vector<double> data_;
};
That will make the code much simpler. You don't even need a destructor, because the vector manages the memory.
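For reference, here is a minimal sketch of such a class. The data() and rows() accessors are my own additions for feeding AddSerie; they are not part of the FAQ example:
#include <vector>

class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : nrows_(nrows), ncols_(ncols), data_(nrows * ncols) {}

    // Element access into the row-major contiguous storage
    double& operator()(unsigned r, unsigned c)       { return data_[r * ncols_ + c]; }
    double  operator()(unsigned r, unsigned c) const { return data_[r * ncols_ + c]; }

    // Address of the first element, e.g. for dataset->AddSerie(m.data(), m.rows())
    double*  data()       { return &data_[0]; }
    unsigned rows() const { return nrows_; }

private:
    unsigned nrows_, ncols_;
    std::vector<double> data_;
};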

Using @Stephen's answer I created an XY plot which can easily process various data without messing with the conversion to (double *) and the WXSIZEOF macro. Maybe this chunk of code will be interesting for someone.
...
// create plot
XYPlot *plot = new XYPlot();
// create dynamic dataset and serie
XYDynamicDataset *ddataset = new XYDynamicDataset();
XYDynamicSerie *dds = new XYDynamicSerie();
// add values; you can simply grab this data from other sources,
// such as a std::vector filled by the user
dds->AddXY(1.1, 1.1);
dds->AddXY(3.1, 2.1);
dds->AddXY(5.1, 1.8);
ddataset->AddSerie(dds);
ddataset->SetRenderer(new XYLineRenderer());
plot->AddDataset(ddataset);
...

Related

C++ Function Alters Value of Passed Parameter

I have a simple swapping function to take an integer array, and return a new array with swapped values.
int* Node::dataSwap(int *data, int n_index, int swap_index){
printDatt(data);
int *path = data;
int swapped = data[n_index];
int to_swap = data[swap_index];
path[n_index] = to_swap;
path[swap_index] = swapped;
printDatt(data);
return path;
}
However, the original data is being altered by this function. The output looks something like this (printing what should be the same data to the console):
0, 1, 2
3, 4, 5
6, 7, 8
0, 1, 2
3, 4, 8
6, 7, 5
Why is "data" being changed when I am not changing it? Is "path" a reference to the actual mem addr of "data"?
The type of the argument data and the local variable path is int *. You can read this as "pointer to int".
A pointer is a variable holding a memory address. Nothing more, nothing less. Since you set path = data, those two pointers are equal.
In your mind, data is an array. But that's not what the function dataSwap is seeing. To the function dataSwap, its argument data is just a pointer to an int. This int is the first element of your array. You accessed elements of the array using data[n_index]; but that's just a synonym for *(data + n_index).
How can you remedy this problem?
The C way: malloc and memcpy
Since you want to return a new array, you should return a new array. To do this, you should allocate a new region of memory with malloc, and then copy the values of the original array to the new region of memory, using memcpy.
Note that it is impossible to do this using only the current arguments of the function, since none of those arguments indicate the size of the array:
data is a pointer to the first element of the array;
n_index is the index of one of the elements in the array;
swap_index is the index of another element in the array.
So you should add a fourth parameter to the function, int size, to specify how many elements are in the array. You can use size as an argument to malloc and memcpy, or to write a for loop iterating over the elements of the array.
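A bare-bones sketch of that C-style version, with the extra size parameter (the free-standing name dataSwapC is just for illustration):
#include <cstdlib>   // std::malloc, std::free
#include <cstring>   // std::memcpy

// Returns a newly allocated copy of data with two elements swapped.
// The caller owns the result and must free() it.
int* dataSwapC(int* data, int n_index, int swap_index, int size)
{
    int* path = (int*)std::malloc(size * sizeof(int));
    std::memcpy(path, data, size * sizeof(int));  // copy the whole array
    path[n_index]    = data[swap_index];          // perform the swap in the copy
    path[swap_index] = data[n_index];
    return path;
}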
New problem arising: if you call malloc to allocate new memory, then the user will have to call free to free the memory at some point.
C++ has the cool keyword new whose syntax is somewhat lighter than the syntax of malloc. But this doesn't solve the main problem; if you allocate new memory with the keyword new, then the user will have to free the memory with the keyword delete at some point.
Urgh, so much burden!
But this was the C way. A good rule of thumb in C++ is: never handle arrays manually. The standard library has std::vector for that. There are situations where using new might be the best solution; but in most simple cases, it isn't.
The C++ way: std::vector
Using the class std::vector from the standard library, your code becomes:
#include <vector>
std::vector<int> Node::dataSwap(std::vector<int> data, int n_index, int swap_index)
{
std::vector<int> new_data = data;
int swapped = data[n_index];
int to_swap = data[swap_index];
new_data[n_index] = to_swap;
new_data[swap_index] = swapped;
return (new_data);
}
No malloc, no new, no free and no delete. The class std::vector handles all that internally. You don't need to manually copy the data either; the initialisation new_data = data calls the copy constructor of class std::vector and does that for you.
Avoid using new as much as you can; use a class that handles all the memory internally, like you would expect it in a higher-level language.
Or, even simpler:
The C++ way: std::vector and std::swap
#include <vector>
#include <algorithm>
std::vector<int> Node::dataSwap(std::vector<int> data, int n_index, int swap_index)
{
std::vector<int> new_data = data;
std::swap(new_data[n_index], new_data[swap_index]);
return (new_data);
}
Is "path" a reference to the actual mem addr of "data"?
Yes! In order to create a new array that is a copy of the passed data (only with one pair of values swapped over), then your function would need to create the new array (that is, allocate data for it), copy the passed data into it, then perform the swap. The function would then return the address of that new data, which should be freed later on, when it is no longer needed.
However, in order to do this, you would need to also pass the size of the data array to the function.
One way to do this, using 'old-style' C++, is with the new operator. With the added 'size' parameter, your function would look something like this:
int* Node::dataSwap(int *data, int n_index, int swap_index, int data_size)
{
printDatt(data);
int *path = new int[data_size]; // Create new array...
for (int i = 0; i < data_size; ++i) path[i] = data[i]; // ... and copy data
int swapped = data[n_index];
int to_swap = data[swap_index];
path[n_index] = to_swap;
path[swap_index] = swapped;
printDatt(data);
return path; // At some point later on, your CALLING code would "delete[] path"
}
You are changing the memory that the pointer path points to, and that memory is data. I think trying to understand better how pointers work will help you. :)
Then you can use the swap function from the std library:
std::swap(data[n_index], data[swap_index]);
It will make your code nicer.

How to dynamically create a C++ array with a known 2nd dimension?

I have a function:
void foo(double[][4]);
which takes a 2d array with 2nd dimension equal to 4. How do I allocate a 2d array so that I can pass it to the function? If I do this:
double * arr[4];
arr = new double[n][4];
where n is not known to the compiler. I cannot get it to compile. If I use a generic 2d dynamic array, the function foo will not take it.
As asked, it is probably best to use a typedef
typedef double four[4];
four *arr; // equivalently double (*arr)[4];
arr = new four[n];
Without the typedef you get to be more cryptic
double (*arr)[4];
arr = new double [n][4];
You should really consider using standard containers (std::vector, etc) or containers of containers though.
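Putting that together, a quick sketch of how the typedef version could be used with foo (assuming foo only reads the rows you hand it):
typedef double four[4];

void foo(double[][4]);       // as declared in the question

void example(int n)          // n known only at run time
{
    four* arr = new four[n]; // n rows of double[4] in one contiguous block
    arr[0][3] = 1.0;         // use it like a normal 2D array
    foo(arr);                // arr has type double (*)[4], which foo accepts
    delete[] arr;            // release it when done
}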
typedef double v4[4];
v4* arr = new v4[n];
Consider switching to arrays and vectors though.
I know it may not be what the OP intended to do, but it may help others who need a similar answer.
You are trying to make a dynamic array of statically sized arrays. The STL has your solution: std::vector and std::array.
With these containers, things are easier:
std::vector<std::array<int, 4>> foo;
// Allocate memory
foo.reserve(8);
// Or instead of 8, you can use some runtime value
foo.reserve(someSize);
// Note: this does not allocate 8 + someSize elements; it only ensures
// that the vector has capacity for at least someSize
// Add entries
foo.push_back({1, 2, 3, 4});
// Looping
for (auto&& arr : foo) {
arr[3] = 3;
}
// Access elements (valid only once the vector actually holds at least 6 entries)
foo[5][2] = 2;
As an alternative to creating a new type and occupying a symbol, you can use a pointer to pointer and do it like this:
double **arr = new double*[j];
for (int i = 0; i < j; ++i)
{
arr[i] = new double[4];
}
where j is the int variable that holds the dynamic value.
I've written a simple example that shows it working; check it out here.

Is it worth using a vector when making a map?

I have a class that represents a 2D map of size 40x40.
I read data from sensors and build this map by marking cells when my sensors find something, setting a value for the probability of finding an obstacle. For example, when I find an obstacle in cell [52,22] I add, say, 10 to its value and add 5 to the surrounding cells.
So each cell of this map should only hold some small value (probably nothing bigger). When a cell is marked three times by the sensor, its value will be 30 and the surrounding cells will have 15.
And my question is: is it worth using a plain array, or is it better to use a vector, even though I do not sort the cells, don't remove them, etc.? I just set their values and read them later.
Update:
Actually I have in my header file:
using cell = uint8_t;
class Grid {
private:
int xSize, ySize;
cell *cells;
public:
//some methods
};
In cpp :
using cell = uint8_t;
Grid::Grid(int xSize, int ySize) : xSize(xSize), ySize(ySize) {
cells = new cell[xSize * ySize];
for (int i = 0; i < xSize; i++) {
for (int j = 0; j < ySize; j++)
cells[i + j * xSize] = 0;
}
}
Grid::~Grid(void) {
delete[] cells;
}
inline cell* Grid::getCell(int x, int y) const{
return &cells[x + y * xSize];
}
Does it look fine?
I'd use std::array rather than std::vector.
For fixed size arrays you get the benefits of STL containers with the performance of 'naked' arrays.
http://en.cppreference.com/w/cpp/container/array
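For a fixed 40x40 grid of small counters, that can be as simple as this sketch (reusing the cell alias from the question):
#include <array>
#include <cstdint>

using cell = std::uint8_t;

// 40x40 grid, value-initialized to zero
std::array<std::array<cell, 40>, 40> grid{};

// grid[y][x] += 10;   // e.g. mark a sensor hit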
A static (C-style) array is possible in your case since the size is known at compile time.
BUT. It may be interesting to have the data on the heap instead of the stack.
If the array is a global variable, it's ugly and bug-prone (avoid that when you can).
If the array is a local variable (let say, in your main() function), then a stack overflow may occur. Well, it's very unlikely for a 40*40 array of tiny things, but I'd prefer have my data on the heap, to keep things safe, clean, and future-proof.
So, IMHO you should definitely go for the vector, it's fast, clean and readable, and you don't have to worry about stack overflow, memory allocation, etc.
About your data: if you know your values fit in a single byte, go for it!
A uint8_t (same as unsigned char) can store values from 0 to 255. If that's enough, use it.
using cell = uint8_t; // define a nice name for your data type
std::vector<cell> myMap;
size_t size = 40;
myMap.resize(size * size);  // create size*size cells, zero-initialized
side note: don't use new[]. Well, you can, but it has no advantages over a vector. You will probably only gain headaches handling memory manually.
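Applied to the Grid class from the question, a rough sketch could look like this (the at() accessor name is mine; the point is that the vector owns the memory, so no destructor is needed):
#include <cstddef>
#include <cstdint>
#include <vector>

using cell = std::uint8_t;

class Grid {
public:
    Grid(int xSize, int ySize)
        : xSize(xSize), ySize(ySize),
          cells(static_cast<std::size_t>(xSize) * ySize, 0) {}  // zero-filled storage

    cell& at(int x, int y)       { return cells[x + y * xSize]; }
    cell  at(int x, int y) const { return cells[x + y * xSize]; }

private:
    int xSize, ySize;
    std::vector<cell> cells;
};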
Some advantages of using a std::vector are that it can be dynamically allocated (flexible size, can be resized during execution, etc.) and easily passed to or returned from a function. Since you have a fixed size of 40x40 and you know you have one element in every cell, I don't think it matters that much in your case, and I would NOT suggest using std::vector for this simple task.
And here is a possible duplicate.

Arbitrary Dimensional Array

So I'm trying to create an n-dimensional array structure for use in a maze generating program.
I've simplified my problem (for the purposes of trying to get the theory figured out before making it templatized and adding all the necessary helper functions).
So my problem currently boils down to wanting to make an ArbitraryArray class that takes an argument to its constructor specifying the number of dimensions. Each dimension will have length = 5 (for now).
This is what I have so far:
class ArbitraryArray{
public:
int array[5];
ArbitraryArray*subArray;
ArbitraryArray(){}
ArbitraryArray(int depth){
if (depth == 2) subArray = new ArbitraryArray[5];
else if (depth > 2) for (int i = 0; i < 5; i++) subArray = new ArbitraryArray(depth - 1);
}
};
And I'd create a 2 dimensional object like so:
ArbitraryArray testArray(2);
Or a 3 dimensional object like so:
ArbitraryArray testArray(3);
Problem is, when I tested it for depth = 3 and then tried to set an integer value, via:
testArray.subArray[3].subArray[4].array[4] = 7;
I received a runtime error, leading me to believe that I'm doing something wrong in how I allocate these objects dynamically.
Also, I included an empty default constructor since that gets called by lines like:
subArray = new ArbitraryArray[5];
I'm aware this may not be the best way to go about creating an arbitrary dimensional array data structure, but I'd really like to figure out why this implementation is not working before potentially looking for better methods.
Also I am aware I shouldn't have a line like:
int array[5];
And that it should be a pointer instead so that there isn't a ton of wasted memory allocation for all the levels of the array above the bottom dimension. And I intend to modify it to that after I get this basic idea working.
How about using std::vector for allocating the correct amount of blank memory, which would be
sizeof(T) * dim1 * dim2 * dim3 * ...
Then write a helper class which takes care of the indexing, i.e., it will compute the flat index i from a given (x, y, z, ...), for however many dimensions you might have.
The beauty of this approach, IMHO, lies in not having to fiddle with pointers; the helper class simply implements an indexing scheme of your preference (row-major or column-major).
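A rough sketch of that helper idea (flat row-major storage; the class name and interface here are just an illustration):
#include <cstddef>
#include <utility>
#include <vector>

// Flat storage plus row-major index arithmetic for an N-dimensional array.
template <typename T>
class NdArray {
public:
    explicit NdArray(std::vector<std::size_t> dims)
        : dims_(std::move(dims)), data_(totalSize()) {}

    // offset = ((i0 * d1 + i1) * d2 + i2) * ... for indices (i0, i1, i2, ...)
    T& operator()(const std::vector<std::size_t>& idx) {
        std::size_t offset = 0;
        for (std::size_t d = 0; d < dims_.size(); ++d)
            offset = offset * dims_[d] + idx[d];
        return data_[offset];
    }

private:
    std::size_t totalSize() const {
        std::size_t n = 1;
        for (std::size_t d : dims_) n *= d;
        return n;
    }

    std::vector<std::size_t> dims_;
    std::vector<T> data_;
};

// Usage: NdArray<int> maze({5, 5, 5}); maze({3, 4, 4}) = 7;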
EDIT
When using std::valarray, things may become easier, as you can use std::slice and/or std::gslice to calculate your indexing for you.
Haven't compiled anything, just visual inspection. What about this:
template<int array_length>
class ArbitraryArray{
public:
int array[array_length];
ArbitraryArray ** subArray;
ArbitraryArray(){}
ArbitraryArray(int depth){
if (depth == 1)
subArray = 0;
else {
subArray = new ArbitraryArray*[array_length];
for (int i = 0; i < array_length; i++)
subArray[i] = new ArbitraryArray(depth - 1);
}
}
};
Well, for one, if depth is greater than 2, you create five ArbitraryArrays, but you save all their pointers in the single subArray pointer. subArray needs to be an array of pointers to ArbitraryArrays; try ArbitraryArray *subArray[5]; and for (int i = 0; i < 5; i++) subArray[i] = new ArbitraryArray(depth - 1); and see what happens.
In your example you are creating an array that is scattered all over memory instead of one array stored in a contiguous block of memory. This could cause some issues depending on how you handle the memory, e.g. using memcpy on it will never work.
I think a little more flexible approach would be to create one large array and instead compute an index into it based on the number of dimensions:
int n = static_cast<int>(pow( 5.0, static_cast<double>(depth) ));
Type* a = new Type[ n ];
i.e. since you base your array size on 5, a 2-dim array would be 5x5 and a 3-dim array 5x5x5.
To access an element, say element (2,2,3) (0-based), the index can be calculated as
a[2*5*5 + 2*5 + 3]
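The same arithmetic as a tiny helper, just as a sketch (it assumes every dimension has length 5 and indices are given from the most significant dimension first):
#include <vector>

// e.g. index5({2, 2, 3}) == 2*5*5 + 2*5 + 3
int index5(const std::vector<int>& idx)
{
    int offset = 0;
    for (int i : idx)
        offset = offset * 5 + i;
    return offset;
}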
Just use the Boost multi_array class. It is very flexible, efficient and can perform bounds checking.
Boost Multi-Array
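For example, a quick sketch with Boost.MultiArray:
#include <boost/multi_array.hpp>

int main()
{
    // 3-dimensional 5x5x5 array of int
    boost::multi_array<int, 3> maze(boost::extents[5][5][5]);
    maze[3][4][4] = 7;
}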

Incrementally dynamic allocation of memory in C/C++

I have a for-loop that needs to incrementally add columns to a matrix. The size of the rows is known before entering the for-loop, but the size of the columns varies depending on some condition. Following code illustrates the situation:
N = getFeatureVectorSize();
float **fmat; // N rows, dynamic number of cols
for(size_t i = 0; i < getNoObjects(); i++)
{
if(Object[i] == TARGET_OBJECT)
{
float *fv = new float[N];
getObjectFeatureVector(fv);
// How to add fv to fmat?
}
}
Edit 1: This is how I temporarily solved my problem:
N = getFeatureVectorSize();
float *fv = new float[N];
float *fmat = NULL;
int col_counter = 0;
for(size_t i = 0; i < getNoObjects(); i++)
{
if(Object[i] == TARGET_OBJECT)
{
getObjectFeatureVector(fv);
fmat = (float *) realloc(fmat, (col_counter+1)*N*sizeof(float));
for(int r=0; r<N; r++) fmat[col_counter*N+r] = fv[r];
col_counter++;
}
}
delete [] fv;
free(fmat);
However, I'm still looking for a way to incrementally allocate memory of a two-dimensional array in C/C++.
To answer your original question
// How to add fv to fmat?
When you use float **fmat you are declaring a pointer to [an array of] pointers. Therefore you have to allocate (and free!) that array before you can use it. Think of it as the row pointer holder:
float **fmat = new float*[N];
Then in your loop you simply do
fmat[i] = fv;
However I suggest you look at the std::vector approach since it won't be significantly slower and will spare you from all those new and delete.
Better: use boost::multi_array, as in the top answer here:
How do I best handle dynamic multi-dimensional arrays in C/C++?
Trying to dynamically allocate your own matrix type is pain you do not need.
Alternatively, as a low-tech, quick and dirty solution, use a vector of vectors, like this:
C++ vector of vectors
If you want to do this without fancy data structures, you should declare fmat as an array of size N of pointers. For each column, you'll probably have to just guess at a reasonable size to start with. Dynamically allocate an array of that size of floats, and set the appropriate element of fmat to point at that array. If you run out of space (as in, there are more floats to be added to that column), try allocating a new array of twice the previous size. Change the appropriate element of fmat to point to the new array and deallocate the old one.
This technique is a bit ugly and can cause many allocations/deallocations if your predictions aren't good, but I've used it before. If you need dynamic array expansion without using someone else's data structures, this is about as good as you can get.
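A rough sketch of that doubling scheme for one such column (the capacity bookkeeping and names are mine):
#include <cstddef>

// Append one value to a column buffer, growing it geometrically when full.
void appendToColumn(float*& col, std::size_t& size, std::size_t& capacity, float value)
{
    if (size == capacity) {
        std::size_t newCapacity = (capacity == 0) ? 8 : capacity * 2;
        float* bigger = new float[newCapacity];
        for (std::size_t i = 0; i < size; ++i)
            bigger[i] = col[i];          // copy the old contents
        delete[] col;                    // deallocate the old array
        col = bigger;
        capacity = newCapacity;
    }
    col[size++] = value;
}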
To elaborate on the std::vector approach, this is how it would look:
// initialize
N = getFeatureVectorSize();
vector<vector<float>> fmat(N);
Now the loop looks the same, you access the rows by saying fmat[i], however there is no pointer to a float. You simply call fmat[i].resize(row_len) to set the size and then assign to it using fmat[i][z] = 1.23.
In your solution I suggest you make getObjectFeatureVector return a vector<float>, so you can just say fmat[i] = getObjectFeatureVector();. Thanks to the C++11 move constructors this will be just as fast as assigning the pointers. Also this solution will solve the problem of getObjectFeatureVector not knowing the size of the array.
Edit: As I understand you don't know the number of columns. No problem:
deque<vector<float>> fmat;
Given this function:
std::vector<float> getObjectFeatureVector();
This is how you add another column:
fmat.push_back(getObjectFeatureVector());
The number of columns is fmat.size() and the number of rows in a column is fmat[i].size().
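Tying this back to the original loop, the whole thing could then look roughly like this (assuming getObjectFeatureVector can be changed to return a std::vector<float>):
#include <deque>
#include <vector>

std::deque<std::vector<float>> fmat;                 // one entry per added column

for (size_t i = 0; i < getNoObjects(); i++)
{
    if (Object[i] == TARGET_OBJECT)
        fmat.push_back(getObjectFeatureVector());    // the column's storage is moved in
}
// fmat.size() columns, each with fmat[k].size() == N values; nothing to delete or free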