I have an array of character pointers which I want to send to device. Can somebody tell me how?
Here is what I have tried so far:
char **a;
char **b;
*a[0]="Foo1";
*a[1]=="Foo2";
cudaMalloc(void**)?,sizeof(?);
cudamemcpy(b,a,sizeof(?),cudaMemcpyHostToDevice);
How do I pass in the parameters to the above two functions?
And finally how should the kernel be called? (Do I just pass b or *b or something?)
If you send the character pointers to the device, you will have an array of CPU memory addresses on the device, which is probably not what you want.
If you want to send the whole data structure there, allocate sizeof(char) * (string_length + 1) bytes for each string on the device (the +1 keeps the terminating null), copy each string over, and store the resulting device pointers in a CPU array of char*s. Then, once that's complete, send the array of device pointers to the device, allocating sizeof(char*) * number_of_strings bytes for it.
When you call the kernel, give it the device-side array of device pointers.
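A minimal sketch of that approach (error checking omitted; myKernel is just a placeholder kernel that takes the device-side char** and prints the strings):
#include <cstring>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void myKernel(char **strings)   // placeholder kernel, just reads the data
{
    printf("%s %s\n", strings[0], strings[1]);
}

int main()
{
    const char *h_strings[2] = { "Foo1", "Foo2" };
    char *h_devPtrs[2];                     // host array that will hold device addresses

    // Allocate and copy each string to the device individually.
    for (int i = 0; i < 2; ++i) {
        size_t len = strlen(h_strings[i]) + 1;   // include the terminating '\0'
        cudaMalloc((void**)&h_devPtrs[i], len);
        cudaMemcpy(h_devPtrs[i], h_strings[i], len, cudaMemcpyHostToDevice);
    }

    // Copy the array of device pointers itself to the device.
    char **d_strings;
    cudaMalloc((void**)&d_strings, 2 * sizeof(char*));
    cudaMemcpy(d_strings, h_devPtrs, 2 * sizeof(char*), cudaMemcpyHostToDevice);

    myKernel<<<1, 1>>>(d_strings);   // the kernel receives the device-side char**
    cudaDeviceSynchronize();
    return 0;
}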
To assign, use array[0] = "string literal" - no stars needed.
To get the length, use strlen(); sizeof is irrelevant here.
Never copy into this array of string literals or pass it as an out parameter - you would have to allocate writable memory for that.
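On the host side that boils down to something like this small sketch:
#include <cstring>

int main()
{
    const char *a[2];
    a[0] = "Foo1";                 // no dereference needed; a[0] is already a pointer
    a[1] = "Foo2";
    size_t len = std::strlen(a[0]); // 4, not counting the terminating '\0'
    return (int)len;
}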
I am looking for something that gives me the size of the memory taken by the str character pointer.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char * str = (char *) malloc(sizeof(char) * 100);
    int size = 0;
    size = /* library function or anything used to find the size */
    printf("Total size of str array - %d\n", size);
    return 0;
}
I want to prove that the given memory is 100 bytes.
Does anyone have any idea how to do this?
A raw pointer only knows that it points to a single element of its type. If the thing it points to happens to be part of an array, the pointer doesn't know, and there's no way to get that information from it.
You want to instead use types that do know their size, for example std::string, std::array or std::vector.
The C and C++ standards do not provide a way to get, from an address, the amount of memory that was requested in the call to malloc that returned that address.
Some C or C++ implementations provide a way to get the amount of memory that was provided at the given address, such as malloc_size. The amount provided may be greater than the amount that was requested.
If the memory contains a string, which is an array of characters terminated by a null character, then you can determine the length of the string by counting characters up to the null character; the standard strlen function does exactly that. This length is different from the space allocated unless, of course, the string happens to fill the space.
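For example, on glibc the non-standard malloc_usable_size() reports the usable size of a malloc'd block, which may be larger than the 100 bytes requested (a platform-specific sketch, not portable):
#include <malloc.h>   /* glibc-specific header for malloc_usable_size */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *str = (char *) malloc(sizeof(char) * 100);
    /* May print a value >= 100: the allocator is free to hand out more. */
    printf("requested 100, usable size = %zu\n", malloc_usable_size(str));
    free(str);
    return 0;
}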
There is no (good, standard, portable) way to tell from a pointer value alone whether it's the first element of an array or not, nor how many elements follow it. That information has to be tracked separately.
If you're writing in C++, don't do your own memory management if you can help it. Use a standard container type like std::vector or std::map (or std::string for text). If you must do your own memory management, use the new and delete operators instead of the *alloc and free library functions, and wrap a class around those operations that also keeps track of how many elements have been allocated (which, like std::vector and std::map, is returned via a read-only size() method).
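A short sketch of the container approach, where the size travels with the object:
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::string str(100, '\0');   // 100 characters, size tracked by the object
    std::vector<char> buf(100);   // or a plain byte buffer

    std::cout << "str holds " << str.size() << " chars, "
              << "buf holds " << buf.size() << " chars\n";
    return 0;
}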
My task is to measure the time of communication between two processes.
I want to send 4, 8, ..., 1000, ..., 10000 bytes of data and measure the time it takes to send and receive back the message.
So I figured out that I will send an array of shorts.
When I send an array initialised like this:
mpi::communicator world;
short message[100000];
....
world.send(1,0, message);
the time seems to be OK, and I can see a time difference between message[100000] and message[1000].
But I want to allocate the array dynamically, like this:
short *message = new short[100000];
...
world.send(1,0, *message);
It seems like the second send always sends the same amount of data, no matter what the size of the array is.
So my question is: how do I send a dynamically allocated array?
In the second case message is of type short * and *message dereferences to a scalar short, i.e. to the first element of the array only. Use
world.send(1, 0, message, n);
instead and vary the value of n. It should also (probably) work if you cast the pointer to a pointer to an array and then dereference it:
world.send(1, 0, *reinterpret_cast<short(*)[100]>(message));
The short(*)[100] type is a pointer to an array of 100 shorts; dereferencing it yields an array whose size is known at compile time.
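For completeness, a sketch of a timed round trip using the pointer + count overload (assumed setup: the program is run with at least two processes, and rank 1 simply echoes the message back):
#include <boost/mpi.hpp>
#include <boost/mpi/timer.hpp>
#include <iostream>
namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    const int n = 100000;
    short *message = new short[n]();   // zero-initialised payload

    if (world.rank() == 0) {
        mpi::timer t;
        world.send(1, 0, message, n);          // pointer + element count
        world.recv(1, 1, message, n);
        std::cout << "round trip for " << n << " shorts: "
                  << t.elapsed() << " s\n";
    } else if (world.rank() == 1) {
        world.recv(0, 0, message, n);
        world.send(0, 1, message, n);          // echo it back
    }

    delete[] message;
    return 0;
}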
I have a function that takes a char* parameter used to return some information from inside the function (int foo(char* param1)).
How can I be sure that this parameter has enough allocated space to receive all the information I need to put in it?
I can check whether it is a valid (non-null) pointer, but I haven't found a way to be sure about the size/length allocated for the parameter.
I can't change the function (can't add another parameter with the size).
AFAIK, C++ does not have any facility to verify the amount of space allocated to a pointer. If the input points to a NUL-terminated array of characters (i.e. a C string), then you can use strlen(). Typically these kinds of functions in C and C++ must be well documented as to what is expected of the parameters, and the function is implemented assuming that the caller honours the documented contract.
If I have understood the question correctly, there is no way for you to ascertain the size of the valid memory associated with the pointer. If it points to an array of data, the usual way is to pass a size parameter; if you do not have that option, you do not know how much you may safely access.
Easy answer: you can't.
Little more complicated C-style answer: if that array of chars has a terminating NUL (0) byte, you can use strlen().
OS-specific answer: if the memory for the array was obtained using malloc(), you can use malloc_size() and malloc_usable_size(), at least on OS X and Linux respectively. On Windows, for applications that use the Microsoft C Runtime, there's the _msize() function.
You can't be sure. Not really. The only practical check you can do for pointer validity is to check that it is not NULL.
As far as knowing the size of the buffer param1 points to, the only thing that comes to mind is this stupid hack: before calling the function, store the size of the buffer at the beginning of the buffer that param1 points to. Then copy your data into the buffer, overwriting the size once you are done with the checks.
Like this:
// Caller: stash the buffer size at the front of the buffer before the call.
*(unsigned int*)param1 = buf_size;
foo(param1);

int foo(char* param1)
{
    if (0 == param1)
    {
        // fail: null pointer
    }

    // Read back the size the caller stored at the front of the buffer.
    unsigned int buf_size = *(unsigned int*)param1;
    if (buf_size < whateverlimit)
    {
        // fail: buffer too small
    }

    // copy data into the buffer (this overwrites the stored size)
}
I have not compiled this code so it might need some corrections.
I'm new to pointers in C++. I'm not sure why I need pointers like char * something[20] as opposed to just char something[20][100]. I realize that the second method would mean that a 100-char block of memory is allocated for each element in the array, but wouldn't the first method introduce memory leak issues?
If someone could explain to me how char * something[20] allocates memory, that would be great.
Edit:
My C++ Primer Plus book is doing:
const char * cities[5] = {
"City 1",
"City 2",
"City 3",
"City 4",
"City 5"
};
Isn't this the opposite of what people just said?
You allocate 20 pointers in memory; then you will need to go through each and every one of them and allocate its memory dynamically:
something[0] = new char[100];
something[1] = new char[20]; // they can differ in size
And delete them all separately:
delete [] something[0];
delete [] something[1];
EDIT:
const char* text[] = {"These", "are", "string", "literals"};
Strings specified directly in the source code ("string literals", which in C++ are arrays of const char) are quite different from dynamically allocated char * buffers, mainly because you don't have to worry about allocating or freeing them. They are also generally handled very differently in memory, but this depends on the implementation of your compiler.
You're right.
You'd need to go through each element of that array and allocate a character buffer for each one.
Then, later, you'd need to go through each element of that array and free the memory again.
Why you would want to faff about with this in C++ is anyone's guess.
What's wrong with std::vector<std::string> myStrings(20)?
It will allocate space for twenty char-pointers.
They will not be initialized, so typical usage looks like
char * something[20];
for (int i = 0; i < 20; i++)
    something[i] = strdup("something of a content");
and later
for (int i = 0; i < 20; i++)
    if (something[i])
        free(something[i]);
You're right - the first method may introduce memory leak issues, plus the overhead of the dynamic allocations and an extra pointer indirection on every read. I think the second method is usually preferable, unless it wastes too much RAM or you may need the strings to grow longer than 99 characters.
How the first method works:
char* something[20];               // Stores 20 pointers.
something[0] = (char*)malloc(100); // Make something[0] point to a new buffer of 100 bytes.
sprintf(something[0], "hai");      // Make the new buffer contain "hai", going through the pointer in something[0].
free(something[0]);                // Release the buffer.
char* smth[20] does not allocate any memory on the heap. It allocates just enough space on the stack to store 20 pointers. The values of those pointers are undefined, so before using them, you have to initialize them, like this:
char* smth[20];
smth[0] = new char[100]; // allocate memory for 100 chars, store the address of the first one in smth[0]
//..some code..
delete[] smth[0];
First of all, this is almost inapplicable in C++. The normal equivalent in C++ would be something like std::vector<std::string> something;
In C, the primary difference is that you can allocate each string separately from the others. With char something[M][N], you always allocate exactly the same number of strings, and the same space for each string. This will frequently waste space (when the strings are shorter than you've made space for), and won't allow you to deal with any more strings, or longer strings, than you've made space for initially.
char *something[20] lets you deal with longer/shorter strings more efficiently, but still only makes space for 20 strings.
The next step (if you're feeling adventurous) is to use something like:
char **something;
and allocate the strings individually, and allocate space for the pointers dynamically as well, so if you get more than 20 strings you can deal with that as well.
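A rough sketch of that fully dynamic variant in C++ terms (new/delete here; the sample strings and the doubling growth policy are just illustrative):
#include <cstddef>
#include <cstring>

// Append a copy of 'text' to a grow-able array of C strings.
void append(char **&strings, size_t &count, size_t &capacity, const char *text)
{
    if (count == capacity) {                 // pointer array is full: grow it
        capacity = capacity ? capacity * 2 : 20;
        char **bigger = new char*[capacity];
        for (size_t i = 0; i < count; ++i)
            bigger[i] = strings[i];
        delete[] strings;
        strings = bigger;
    }
    strings[count] = new char[std::strlen(text) + 1];  // one buffer per string
    std::strcpy(strings[count], text);
    ++count;
}

int main()
{
    char **something = 0;
    size_t count = 0, capacity = 0;

    append(something, count, capacity, "hello");
    append(something, count, capacity, "a rather longer string");

    // Eventually every string and the pointer array itself must be freed.
    for (size_t i = 0; i < count; ++i)
        delete[] something[i];
    delete[] something;
    return 0;
}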
I'll repeat, however, that for most practical purposes, this is restricted to C. In C++, the standard library already has data structures for situations like these.
C++ has pointers because C has pointers.
Why do we use pointers?
To track dynamically-allocated memory. The memory allocation functions in C (malloc, calloc, realloc) and the new operator in C++ all return pointer values.
To mimic pass-by-reference semantics (C only). In C, all function arguments are passed by value; the formal parameter and the actual parameter are distinct objects, and modifying a formal parameter doesn't affect the actual parameter. We get around this by passing pointers to the function. C++ introduced reference types, which serve the same purpose, but are a bit cleaner and safer than using pointers.
To build dynamic, self-referential data structures. A struct cannot contain an instance of itself, but it can contain a pointer to an instance. For example, the following code
struct node
{
    data_t data;
    struct node *next;
};
creates a data type for a simple linked-list node; the next member explicitly points to the next element in the list. Note that in C++, the STL containers for stacks and queues and vectors all use pointers under the hood, isolating you from the bookkeeping.
There are literally dozens of other places where pointers come up, but those are the main reasons you use them.
Your array of pointers could be used to store strings of varying length by allocating just enough memory for each, rather than relying on some maximum size (which will eventually be exceeded, leading to a buffer overflow error, and in any case will lead to internal memory fragmentation). Naturally, in C++ you'd use the string data type (which hides all the pointer and memory management behind the class API) instead of pointers to char, but someone has decided to confuse you by starting with low-level details instead of the big picture.
"I'm not sure why I need pointers like char * something[20] as opposed to just char something[20][100]. I realize that the second method would mean that a 100-char block of memory will be allocated for each element in the array, but wouldn't the first method introduce memory leak issues?"
The second method will suffice if you're only referencing your buffer(s) locally.
The problem comes when you pass the array name to another function. When you pass char something[10] to another function, you're actually passing char* something, because the array length doesn't go along for the ride (see the sketch after the list below).
For multidimensional arrays, you can declare a function that takes an array of determinate length in all but one dimension, e.g. void foo(char something[][100]).
So why use the first form rather than the second? I can think of a few reasons:
You don't want the restriction that the entire buffer must reside in contiguous memory.
You don't know at compile-time that you'll need each buffer, or that the length of each buffer will need to be the same size, and you want the flexibility to determine that at run-time.
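A small sketch of the decay mentioned above, using a hypothetical probe() function: inside it, sizeof sees only a pointer, so the length information is gone:
#include <cstdio>

void probe(char something[10])   // the [10] is ignored: this really receives a char*
{
    std::printf("inside:  %zu\n", sizeof(something));  // sizeof(char*), e.g. 8
}

int main()
{
    char something[10];
    std::printf("outside: %zu\n", sizeof(something));  // 10
    probe(something);
    return 0;
}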
Assume these are declarations of local variables inside a function, so the arrays live on the stack.
char * something[20]
Assuming a 32-bit build, this allocates 80 bytes of data on the stack:
4 bytes for each pointer address, 20 pointers total = 4 x 20 = 80 bytes.
The pointers are all uninitialized, so you need to write additional code to allocate/free
the buffers for doing this.
It roughly looks like:
[0] [4 Bytes of Uninitialized data to hold a pointer/memory address...]
[1] [4 Bytes of ... ]
...
[19]
char something[20][100]
Allocates 2000 bytes on the stack.
100 Bytes for each something, 20 somethings total = 100 x 20 = 2000 bytes.
[0] [100 bytes to hold characters]
[1] [100 bytes to hold characters]
...
[19]
The char * approach has a smaller memory overhead, but you have to manage the memory yourself.
The char[][] approach has a bigger up-front memory footprint, but no additional memory management.
With either approach, you have to be careful when writing to a buffer not to exceed/overwrite the memory allocated for it.
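To make the footprint difference concrete, a tiny sketch (the pointer size, and therefore the first number, depends on whether the build is 32-bit or 64-bit):
#include <cstdio>

int main()
{
    char *a[20];        // 20 pointers; the buffers are allocated (and freed) separately
    char  b[20][100];   // 20 buffers of 100 chars each, all on the stack

    std::printf("sizeof(a) = %zu\n", sizeof(a));  // 20 * sizeof(char*): 80 or 160
    std::printf("sizeof(b) = %zu\n", sizeof(b));  // 20 * 100 = 2000
    return 0;
}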
I need to be able to set the size of an array based on the number of bytes in a file.
For example, I want to do this:
// Obtain the file size.
fseek (fp, 0, SEEK_END);
size_t file_size = ftell(fp);
rewind(fp);
// Create the buffer to hold the file contents.
char buff[file_size];
However, I get a compile time error saying that the size of the buffer has to be a constant.
How can I accomplish this?
Use a vector.
std::vector<char> buff(file_size);
The entire vector is filled with '\0' first, automatically, but that performance "loss" might not be noticeable. It's certainly safer and more comfortable. Then access it like a usual array. You may even pass a pointer to the data to legacy C functions:
legacy(&buff[0]); // valid!
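For instance, a sketch of pulling the file contents into the vector with a hypothetical read_all() helper (error handling omitted; assumes a non-empty file opened in binary mode):
#include <cstdio>
#include <vector>

std::vector<char> read_all(std::FILE *fp)
{
    // Obtain the file size.
    std::fseek(fp, 0, SEEK_END);
    long file_size = std::ftell(fp);
    std::rewind(fp);

    // Size the buffer up front, then read straight into it.
    std::vector<char> buff(file_size);
    std::fread(&buff[0], 1, buff.size(), fp);
    return buff;
}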
You should use a std::vector and not an array.
Real arrays require you to specify their size so that the compiler can create some space for them -- this is why the compiler complains when you don't supply a constant integer. Dynamic arrays are represented by a pointer to the base of the array -- and you have to retrieve the memory for the dynamic array yourself. You may then use the pointer with subscript notation. e.g.,
int * x;
x = (int *) malloc( sizeof(int) * getAmountOfArrayElements() /* non-const result */ );
x[5] = 10;
This leads to two types of problems:
Buffer over/under flows : you might subscript-index past either end of the array.
You might forget to release the memory.
Vector provides a nice little interface to hide these problems from you -- if used correctly.
Replace
char buff[file_size];
with
char *buff = new char[file_size];
and once you are done using buff, you can free the memory with:
delete[] buff;
There are two points in your question I'd like to cover.
The actual question, how do you create the array. Johannes answered this. You use a std::vector and create it with a size allocation.
Your error message. When you declare an array of some type, you must declare it with a constant size. So for example
const int FileSize = 1000;
// stuff
char buffer[FileSize];
is perfectly legitimate.
On the other hand, what you did, attempting to declare an array with variable size, and then not allocating with new, generates an error.
The problem is that buff needs to be created on the heap (instead of the stack); the compiler wants to know the exact size in order to create it on the stack.
char* buff = new char[file_size];