Get the number of elements in a struct array in C++

I need to find the number of elements in a struct array
I have this struct
struct Features {
int feature1;
string feature2;
string feature3;
string feature4;
bool feature5;
};
I have then turned it into an array
Features *feature = new Features[100];
I have then entered some values
for(int i = 0; i < 3; i++)
{
feature[i].feature1 = 5;
feature[i].feature2 = "test";
feature[i].feature3 = "test2";
feature[i].feature4 = "test3";
feature[i].feature5 = true;
}
Now I want to get the size of the array, which should be 3.
How do I do this?
cout << (sizeof feature / sizeof *feature) << endl;
doesn't seem to work as it prints the incorrect value. (It keeps printing 4.)
Sorry if this is a stupid question, I am still learning C++.

cout << (sizeof feature / sizeof *feature) << endl;
Should be
cout << (sizeof(feature) / sizeof(*feature)) << endl;
Note the brackets (strictly, sizeof on an expression works without them, but they make the grouping obvious). Sadly, even then it cannot tell you what you want, for a couple of reasons.
feature is a pointer.
A pointer is a location in storage, an address, and all addresses on any system you're likely to encounter will be the same size, probably 4 or 8 bytes. Let's assume 4 for now and sub it into the equation.
cout << (4 / sizeof(*feature)) << endl;
This will certainly print 0 because *feature is definitely larger than 4 and in integer math 4 / <anything greater than 4> will be truncated to 0.
If feature was defined
Features feature[100];
And unless the size of the data block being pointed to needs to change at runtime, there is no reason why it shouldn't be. Anyway, now feature is more than just a pointer to some arbitrary block of memory. It is a block of exactly 100 Features, with a size of 100 * sizeof(feature[0]). This is a fundamental difference between an array and a pointer, so next time someone tells you "Arrays are pointers!" you can tell them to go expletive deleted themselves.
For example:
cout << (sizeof(feature) / sizeof(feature[0])) << endl;
will print 100, not the 0 we got back when feature was a pointer. 0 != 100. Array is not pointer. Array can be used like pointer in a lot of circumstances.
Features feature2d[100][100];
Features ** feature2dptr = feature2d;
is not one of them. Remember this when you have to pass a 2D array into a function.
An array knows its size but nothing about how much is being used.
From the size we can compute capacity as we did above, but to be frank this is a sucker bet. feature could be defined
constexpr int MAX_FEATURES = 100;
Features feature[MAX_FEATURES];
And then rather than this:
cout << (sizeof(feature) / sizeof(feature[0])) << endl;
we print the much less convoluted
cout << MAX_FEATURES << endl;
But this is still not what we want.
So how do we do this right?
The preferred C++ solution is to use std::vector. vector does all sorts of cool things for you like resize itself and keep a count on how much of it is actually used. Plus it is Rule of Three compliant unlike the typical pointer-and-dynamic-array approach. What is The Rule of Three? Well it's really important. I recommend reading the link.
To define a vector of Features
std::vector<Features> feature;
To store a Feature
Features temp;
feature.push_back(temp);
Often a better route is to define a constructor for Features and
feature.emplace_back(feature1, feature2, feature3, feature4, feature5);
because this eliminates the need to create and copy a temporary Feature.
To get the number of Features in feature
feature.size();
Simple, huh?
OK. So some folk think you shouldn't use vector until you're older and more experienced. They want you to suffer through the pit falls of memory management while you are still learning to write a decent, well-structured program and figuring out how to debug the trivial mistakes that new programmers make. I'm not down with this, but it seems to be the educational paradigm that rules the land.
Let's start with the fixed array because it is simple and much less complicated.
constexpr int MAX_FEATURES = 100;
Features feature[MAX_FEATURES];
int num_features = 0;
Every time you need to add a Feature to the array, you first make sure you have room.
if(num_features < MAX_FEATURES)
Then add the Feature
feature[num_features] = new_feature;
and then increment, add one to, num_features.
num_features++;
How many Features do you have?
cout << num_features << endl;
If you absolutely must do this with pointers
int capacity = 100;
Features * feature = new Features[capacity];
int num_features = 0;
Now you have to maintain capacity and num_features because the only reason you would do something this stupid is to be able to resize the memory block feature points to as required.
if(num_features >= capacity)
{
Make a bigger feature
capacity = capacity * 1.5; // why 1.5? Because I felt like it.
Features * bigger_feature = new Features[capacity];
Copy everything from feature to bigger_feature
for (int index = 0; index < num_features; index++)
{
bigger_feature[index] = feature[index];
}
Free the memory used by feature
delete[] feature;
Replace feature with bigger_feature
feature = bigger_feature;
}
Now you can
feature[num_features] = new_feature;
num_features++;
Here's that blob again in a nice cut-and-pastable blob:
if(num_features == capacity)
{
capacity = capacity * 1.5; // why 1.5? Because I felt like it.
Features * bigger_feature = new Features[capacity];
for (int index = 0; index < num_features; index++)
{
bigger_feature[index] = feature[index];
}
delete[] feature;
feature = bigger_feature;
}
feature[num_features] = new_feature;
num_features++;
Blah. And this pointer mishmash is most definitely not Rule of Three compliant so you likely have to write copy and move constructors, assignment and move operators, and a destructor.
At the end when you are done you must
delete[] feature;
How many Features do you have?
cout << num_features << endl;

No, actually the size of the array is 100, not 3, because you allocated a 100-element array with new. The fact that you initialized 3 elements in the array doesn't matter. It's still a 100-element array.
But whether it's 3, or 100, it doesn't matter. It's up to you to keep track of the size of the allocated array. C++ doesn't do it for you.
But if you do want for C++ to keep track of the size of the array, use std::vector. That's what it's there for.

You need to keep track of it. You allocated enough space for 100 Features and you got exactly that: enough space for 100 Features. Keeping track of how many of them you've subsequently initialized is something you need to do yourself.

I'm sure it's printing the correct value as programmed. But sizeof only tells you how much space is allocated, not how many members contain meaningful values.
If you want a variable size array, use std::vector. Otherwise, keep the setup you have, but initialize a member (such as feature1) to a recognizable value (such as -999 or something else you won't expect to use meaningfully) and then see how far you can loop before finding that value.

Related

luaT_pushudata returns proper Tensor type and dimension, but garbage data

I have a short clip of C++ code that should theoretically work to create and return a torch.IntTensor object, but when I call it from Torch I get garbage data.
Here is my code (note this snippet leaves out the function registering, but suffice it to say that it registers fine--I can provide it if necessary):
static int ltest(lua_State* L)
{
std::vector<int> matches;
for (int i = 0; i < 10; i++)
{
matches.push_back(i);
}
performMatching(dist, matches, ratio_threshold);
THIntStorage* storage = THIntStorage_newWithData(&matches[0], matches.size());
THIntTensor* tensorMatches = THIntTensor_newWithStorage1d(storage, 0, matches.size(), 1);
// Push result to Lua stack
luaT_pushudata(L, (void*)tensorMatches, "torch.IntTensor");
return 1;
}
When I call this from Lua, I should get a [torch.IntTensor of size 10] and I do. However, the data appears to be either memory addresses or junk:
29677072
0
16712197
3
0
0
29677328
0
4387616
0
[torch.IntTensor of size 10]
It should have been the numbers [0,9].
Where am I going wrong?
For the record, when I test it in C++
for (int i = 0; i < storage->size; i++)
std::cout << *(storage->data+i) << std::endl;
prints the proper values.
As does
for (int i = 0; i < tensorMatches->storage->size; i++)
std::cout << *(tensorMatches->storage->data+i) << std::endl;
so it seems clear to me that the problem lies in the exchange between C++ and Lua.
So I got an answer elsewhere--the Google group for Torch7--but I'll copy and paste it here for anyone who may need it.
From user #alban desmaison:
Your problem is actually memory management.
When your C++ function returns, your vector<int> is freed, and so is its content.
From that point onward, the tensor points to freed memory, and when you access it, you access freed memory.
You will have to either:
Allocate memory on the heap with malloc (as an array of ints) and use THIntStorage_newWithData as you currently do (the pointer that you give to newWithData will be freed when it is not used anymore by Torch).
Use a vector<int> the way you currently do but create a new Tensor with a given size with THIntTensor_newWithSize1d(matches.size()) and then copy the content of the vector into the tensor.
For the record, I couldn't get it to work with malloc but the copying memory approach worked just fine.

Storing and accessing a pointer to an array in a std::map

The scenario is:
I am writing a framework for a particle simulation application.
I need to add various attributes to the particles, which I don't know yet and which differ per particle. Since an attribute would be accessed and manipulated quite often in a time-critical manner, I decided to store them in a plain C array.
I would prefer to access the different kind of attributes (like pos, vel, force, etc.) by name.
So I decided to use a std::map<const std::string,double*> to store the floating point attributes.
So to my question. I try to store particle attribute values as following
double* attr = new double[3 * 10];
std::map<std::string,double*> doubleAttributes3d;
doubleAttributes3d.insert(std::make_pair("pos", attr));
for(int64_t i = 0;i<10;++i)
{
*(doubleAttributes3d["pos"] + 3 * i) = 1.0;
*(doubleAttributes3d["pos"] + 3 * i + 1) = 2.0;
*(doubleAttributes3d["pos"] + 3 * i + 2) = 3.0;
}
double * ptr = doubleAttributes3d["pos"];
for(int64_t i = 0;i<10;++i)
{
cout << *(ptr + 3 * i) << " ";
cout << *(ptr + 3 * i + 1) << " ";
cout << *(ptr + 3 * i + 2) << endl;
}
Which causes a segfault.
In particular, every time I try to access elements of the array.
Is there a possibility this can ever work?
(I have never needed a map before; maybe I'm simply making a syntax mistake...) Or in other words, why can't I access the memory address which I stored in the map?
Would there be another/better(/actually working) way of storing an unknown number of arrays and give them a "name" in order to access and manipulate them later?
I know there were similar questions asked around here but none of the answers worked out for me.
Answer to the first question:
I tried to run it and it ran without issues, producing the expected results. I used valgrind on it and it reported no undefined behaviour. Your error must be elsewhere, or it was fixed when you changed the bounds of the for loop.
Answer to the second question:
How many variables do you have there? std::map is useful only if there are a lot of variables and most of them are unset for most particles.
In most cases, it's much more comfortable and far faster to index the properties with an enum, that can be used to address a standard c array. For example:
enum pp {
POS,
VEL,
FORCE,
CHARGE,
MASS,
ELECTRON_CONFIGURATION,
ppMax
};
double* particles[1000][ppMax];
particles[0][POS] = new double[3 * 10];
// et cetera
I benchmarked a similar issue and the difference in speed was significant, even though I had to set all unused pointers to nullptr. I don't think the additional spatial complexity matters much in this case.

Large vector "Segmentation fault" error

I have gathered a large amount of extremely useful information from other peoples' questions and answers on SO, and have searched duly for an answer to this one as well. Unfortunately I have not found a solution to this problem.
The following function to generate a list of primes:
void genPrimes (std::vector<int>* primesPtr, int upperBound = 10)
{
std::ofstream log;
log.open("log.txt");
std::vector<int>& primesRef = *primesPtr;
// Populate primes with non-neg reals
for (int i = 2; i <= upperBound; i++)
primesRef.push_back(i);
log << "Generated reals successfully." << std::endl;
log << primesRef.size() << std::endl;
// Eratosthenes sieve to remove non-primes
for (int i = 0; i < primesRef.size(); i++) {
if (primesRef[i] == 0) continue;
int jumpStart = primesRef[i];
for (int jump = jumpStart; jump < primesRef.size(); jump += jumpStart) {
if (primesRef[i+jump] == 0) continue;
primesRef[i+jump] = 0;
}
}
log << "Executed Eratosthenes Sieve successfully.\n";
for (int i = 0; i < primesRef.size(); i++) {
if (primesRef[i] == 0) {
primesRef.erase(primesRef.begin() + i);
i--;
}
}
log << "Cleaned list.\n";
log.close();
}
is called by:
const int SIZE = 500;
std::vector<int>* primes = new std::vector<int>[SIZE];
genPrimes(primes, SIZE);
This code works well. However, when I change the value of SIZE to a larger number (say, 500000), the program crashes with a segmentation fault. I'm not familiar enough with vectors to understand the problem. Any help is much appreciated.
You are accessing primesRef[i + jump] where i could be primesRef.size() - 1 and jump could be primesRef.size() - 1, leading to an out of bounds access.
It is happening with a 500 limit, it is just that you happen to not have any bad side effects from the out of bound access at the moment.
Also note that using a vector here is a bad choice as every erase will have to move all of the following entries in memory.
Are you sure you wanted to do
new std::vector<int> [500];
and not
new std::vector<int> (500);
In the latter case, you are specifying the size of the vector, whose location is available to you via the variable named 'primes'.
In the former, you are requesting space for 500 vectors, each sized to the default that the STL library wants.
That would be something like 24*500 bytes on my system. In the latter case, a single vector of length 500 is what you are asking for.
EDIT: look at the usage - he needs just one vector.
std::vector<int>& primesRef = *primesPtr;
The problem lies here:
// Populate primes with non-neg reals
for (int i = 2; i <= upperBound; i++)
primesRef.push_back(i);
You only have N-2 elements in your vector pushed back, but then try to access an element at N-1 (i+jump). The fact that it did not fail on 500 is just dumb luck that the memory being overwritten was not catastrophic.
This code works well. However, when I change the value of SIZE to a larger number (say, 500000), ...
That may blow your stack; an allocation that big won't fit on it. You need dynamic memory allocation for all of the std::vector<int> instances you believe you need.
To achieve that, simply use a nested std::vector like this:
std::vector<std::vector<int>> primes(SIZE);
instead.
But to get straight to the point, I seriously doubt you need SIZE vector instances to store all of the prime numbers found; you need just a single one, initialized like this:
std::vector<int> primes(SIZE);

Memoization Recursion C++

I was implementing a recursive function with memoization for speed ups. The point of the program is as follows:
I shuffle a deck of cards (with an equal number of red and black
cards) and start dealing them face up.
After any card you can say “stop”, at which point I pay you $1 for
every red card dealt and you pay me $1 for every black card dealt.
What is your optimal strategy, and how much would you pay to play
this game?
My recursive function is as follows:
double Game::Value_of_game(double number_of_red_cards, double number_of_black_cards)
{
double value, key;
if(number_of_red_cards == 0)
{
Card_values.insert(Card_values.begin(), pair<double, double> (Key_hash_table(number_of_red_cards, number_of_black_cards), number_of_black_cards));
return number_of_black_cards;
}
else if(number_of_black_cards == 0)
{
Card_values.insert(Card_values.begin(), pair<double, double> (Key_hash_table(number_of_red_cards, number_of_black_cards), 0));
return 0;
}
card_iter = Card_values.find(Key_hash_table(number_of_red_cards, number_of_black_cards));
if(card_iter != Card_values.end())
{
cout << endl << "Debug: [" << number_of_red_cards << ", " << number_of_black_cards << "] and value = " << card_iter->second << endl;
return card_iter->second;
}
else
{
number_of_total_cards = number_of_red_cards + number_of_black_cards;
prob_red_card = number_of_red_cards/number_of_total_cards;
prob_black_card = number_of_black_cards/number_of_total_cards;
value = max(((prob_red_card*Value_of_game(number_of_red_cards - 1, number_of_black_cards)) +
(prob_black_card*Value_of_game(number_of_red_cards, number_of_black_cards - 1))),
(number_of_black_cards - number_of_red_cards));
cout << "Check: value = " << value << endl;
Card_values.insert(Card_values.begin(), pair<double, double> (Key_hash_table(number_of_red_cards, number_of_black_cards), value));
card_iter = Card_values.find(Key_hash_table(number_of_red_cards , number_of_black_cards ));
if(card_iter != Card_values.end());
return card_iter->second;
}
}
double Game::Key_hash_table(double number_of_red_cards, double number_of_black_cards)
{
double key = number_of_red_cards + (number_of_black_cards*91);
return key;
}
The third if statement is the "memoization" part of the code; it stores all the necessary values. The values kept in the map can be thought of as a matrix, where each value corresponds to a certain number of red cards and number of black cards. What is really weird is that when I execute the code for 8 cards in total (4 black and 4 red), I get an incorrect answer. But when I execute the code for 10 cards, my answer is wrong, yet now my answer for 4 black and 4 red (8 cards) is correct! The same can be said for 12 cards, where I get the wrong answer for 12 cards but the correct answer for 10 cards, and so on. There is some bug in the code, but I can't figure it out.
Nobody actually answered this question with an answer. So I will give it a try, though nneonneo actually put his or her finger on the likely source of your problem.
The first problem is probably not actually a problem in this case, but it sticks out like a sore thumb: you are using double to hold a value that you mostly treat as an integer. In this case, on most systems, this is probably OK. But as a general practice it is very bad, in particular because you check whether a double is exactly equal to 0. It probably will be, since on most systems, with most compilers, a double can hold integer values up to a fairly large size with perfect precision, as long as you restrict yourself to adding, subtracting, and multiplying by other integers (or doubles masquerading as integers) to get a new value.
But that's likely not the source of the error you're seeing; it just trips every good programmer's alarm bells for smelly code. It should be fixed. The only time you really need doubles is when you're calculating the relative probability of red or black.
And that brings me to the thing that probably is your problem. You have these two statements in your code:
number_of_total_cards = number_of_red_cards + number_of_black_cards;
prob_red_card = number_of_red_cards/number_of_total_cards;
prob_black_card = number_of_black_cards/number_of_total_cards;
which, of course, should read:
number_of_total_cards = number_of_red_cards + number_of_black_cards;
prob_red_card = number_of_red_cards/double(number_of_total_cards);
prob_black_card = number_of_black_cards/double(number_of_total_cards);
because you've been a good programmer and declared those variables as integers.
Presumably prob_red_card and prob_black_card are variables of type double. But they are not declared anywhere in the code you show us. This means that no matter where they are declared, or what their types are, they must be effectively shared by all sub-calls in the recursive call tree for Game::Value_of_game.
This is almost certainly not what you want. It makes it extremely difficult to reason about what values those variables have and what those values represent during any given call in the recursive call tree for your function. They really have to be local variables in order for the algorithm to be tractable to analyze. Luckily, they seem to only be used within the else clause of a particular if statement. So they can be declared when they are initially assigned values. Here is probably what this code should read:
unsigned const int number_of_total_cards = number_of_red_cards + number_of_black_cards;
const double prob_red_card = number_of_red_cards/double(number_of_total_cards);
const double prob_black_card = number_of_black_cards/double(number_of_total_cards);
Note that I also declare them const. It is good practice to declare any variable whose value you don't expect to change during its lifetime as const. It helps you write code that is more correct by asking the compiler to tell you when you accidentally write code that is incorrect. It can also help the compiler generate better code, though in this case even a trivial analysis reveals that they are not modified during their lifetimes and can be treated as const, so most decent optimizers will essentially put the const in for you for optimization purposes. That still will not give you the benefit of having the compiler tell you if you accidentally use them in a non-const way.

Typecasting an array of structs (of pointers?) to be sent through winsock

I'm still getting my head around pointers and the like, so here goes...
Each client sends the position of the player correctly to the server.
On the server the data is then put into an array of structs (or points to structs?). The data of that array has also been verified to be correct.
The server then is meant to send all the data in that array back to each of the players
I would like to be able to send arrays from the server to the client (and back again), as that is the approach I sort of understand in my head (or arrays of pointers to structs?), eg, arrays of bullets and shooting, or other things.
I'm not worried about bandwidth at the moment or optimal code, that comes later when I understand it all better :)
Basically I've created a struct for my player:-
struct PlayerShip
{
unsigned int health;
unsigned int X;
unsigned int Y;
};
Then I've created an array (from the guidance of friends) that allows me to access the data of those structs (and typecast them as needed) (I think)
PlayerShip *playerArray[serverMaxClients];
for (int i = 0; i < serverMaxClients; i++)
{
playerArray[i] = new PlayerShip;
ZeroMemory(playerArray[i],sizeof(PlayerShip));
}
I recv data from all the connected players and feed it into the array
for (int slotIndex = 0; slotIndex < serverMaxClients; slotIndex++)
{
char szIncoming[1500];
ZeroMemory(szIncoming,1500);
int connectionStatus = recv(clientSocketArray[slotIndex], (char*)szIncoming,sizeof(szIncoming),0);
playerDataTemp = (PlayerShip*)szIncoming;
playerArray[slotIndex]->X = playerDataTemp->X;
playerArray[slotIndex]->Y = playerDataTemp->Y;
}
I print out the array and all the data is correct.
So the next step is to send that data back to the client.
I tried the following and a few variations of it, but I either get compile errors as I try to change variables into references and/or pointers (I still haven't had that epiphany moment where pointers and references suddenly make sense), or the value comes out incorrectly. (The case below currently outputs an incorrect value.)
for (int i = 0; i < serverMaxClients; i++)
{
char* outgoing = (char*)playerArray;
if (clientSlotTaken[i] == true)
{
send(clientSocketArray[i],outgoing,sizeof(playerArray),0);
}
int *valueCheck;
valueCheck = (int*)outgoing;
cout << "VALUE CHECK " << valueCheck << "\n";
delete outgoing;
}
The "Value Check" I'm expecting to be "100" as that should be player 1's health of 100 that was sent to the server earlier.
UPDATE
Okie now I'm starting to get my head around it a bit more.
playerArray is an array of pointers to structs. So I don't want to send the raw data from the array to the clients. I want to send the data of the structs to the players.
So I'm guessing I have to have a bit of code that creates a char array, which I populate with the data from all the player structs.
I tried the following but...
char outgoing[120];
PlayerShip *dataPopulator;
dataPopulator = &outgoing[0]; //Start at the begining of the array
for (int i=0; i < serverMaxClients; i++)
{
*dataPopulator = playerArray[i];
dataPopulator++;
}
I get the following errors
cannot convert 'char*' to 'PlayerShip*' in assignment|
no match for 'operator=' in '* dataPopulator = playerArray[i]'|
Thanks to Joriki, that helped me understand it a bit more, but I still have a way to go :\ Still reading through lots of web pages that try to explain pointers and such
PlayerShip *playerArray[serverMaxClients];
declares an array of serverMaxClients pointers, each of which points to a PlayerShip structure.
Here
char* outgoing = (char*)playerArray;
you're referring to the address of this array of pointers, so your call to send will send a bunch of pointers, which is almost certainly not what you want. Also, here
delete outgoing;
you're trying to delete this array of pointers, which was allocated on the stack and not from the heap, so this might cause major problems; also, you're deleting the same pointer in every iteration.
I think the following comes closer to what you're intending to do:
for (int i = 0; i < serverMaxClients; i++)
{
char* outgoing = (char*)playerArray [i];
if (clientSlotTaken[i] == true)
{
send(clientSocketArray[i],outgoing,sizeof(PlayerShip),0);
}
int *valueCheck;
valueCheck = (int*)outgoing;
cout << "VALUE CHECK " << *valueCheck << "\n";
delete outgoing;
}
This sends the data in the PlayerShip structures, as opposed to just machine-dependent pointers to it, and it frees the memory allocated on the heap by "new PlayerShip", as opposed to the array of pointers to it allocated on the stack. Note also the added asterisk in the output statement for the value check; you were outputting a pointer instead of the value being pointed to. (Even if you'd added the asterisk, you would have just gotten an int cast of the first pointer in the array, rather than the health value in the PlayerShip structure it points to.)
I hope that helps; feel free to ask further questions in the comments if I haven't made it clear.
Update in response to ChiggenWingz' comments and update:
If you want to send all the data to each client, I see three options:
1) You can replace the send in my code with a loop:
if (clientSlotTaken[i] == true)
{
for (int j = 0;j < serverMaxClients;j++)
send(clientSocketArray[i],(char*)playerArray [j],sizeof(PlayerShip),0);
}
But I assume you're trying to avoid that since it may make the I/O less efficient.
2) You can do what you tried to do in your update. To resolve the errors you listed and make it work:
a) You need to cast the char* to a PlayerShip*, just like you had to cast the other way around in your earlier code.
b) In the copy assignment, you have a PlayerShip on the left, but a PlayerShip* on the right -- you need to dereference that pointer.
c) Using a fixed length like 120 is pretty dangerous; if you have more clients later, this could overflow; the size should be serverMaxClients * sizeof (PlayerShip).
d) "&outgoing[0]" is synonymous with "outgoing".
Putting that all together:
char outgoing[serverMaxClients * sizeof (PlayerShip)];
PlayerShip *dataPopulator;
dataPopulator = (PlayerShip*) outgoing; //Start at the begining of the array
for (int i=0; i < serverMaxClients; i++)
{
*dataPopulator = *playerArray[i];
dataPopulator++;
}
3) The best option I think would be to allocate all the PlayerShips together in one contiguous array instead of separately. You could do that either on the stack or on the heap, as you prefer:
a) On the heap:
PlayerShip *playerShips = new PlayerShip [serverMaxClients];
ZeroMemory (playerShips,serverMaxClients * sizeof(PlayerShip));
b) On the stack:
PlayerShip playerShips [serverMaxClients];
ZeroMemory (playerShips,sizeof(playerShips));
(The ZeroMemory call in a) would also work in b), but not the other way around.)
Now, independent of how you allocated the contiguous array, you can write the entire data to each client like this:
for (int i = 0; i < serverMaxClients; i++)
if (clientSlotTaken[i] == true)
send(clientSocketArray[i],(char *) playerShips,serverMaxClients * sizeof(PlayerShip),0);
(Again, in case b) you could replace the size calculation with sizeof(playerShips).)
I hope that clarifies things further -- but feel free to ask more general questions about pointers if you're still confused :-)