I just noticed that QList doesn't have a resize method, while QVector, for example, has one. Why is this? And is there an equivalent function?
This is a more general answer, but I hope that by comparing QList and QVector you will see why there is no need to manually expand the container.
QList uses an internal buffer to store pointers to the elements (or, if an element is no larger than a pointer, or is one of Qt's shared classes, the element itself), and the real data is kept on the heap.
Over time, removing data does not shrink the internal buffer (the empty space is reclaimed by shifting elements left or right, leaving room at the beginning and the end for later insertions).
Appending items creates additional space at the end of the array, just as with QVector, but since the real data is not stored in the internal buffer, QList can create a lot of space very cheaply, no matter how large the item type is (unlike QVector): it is simply adding pointers to its indexing buffer.
For example, on a 32-bit system (4 bytes per pointer), if you store 50 items of 1 MB each, a QVector buffer would have to grow to 50 MB, while QList's internal buffer only needs to allocate 200 bytes. This is where you need to call resize() on a QVector; with QList there is no such need, because allocating a small chunk of memory is nowhere near as problematic as allocating 50 MB.
However, there is a price for this, which means you will sometimes prefer QVector over QList: each item stored in a QList needs one additional heap allocation to hold the item's real data (the data the pointer in the internal buffer points to). If you want to add 10000 items larger than a pointer (items that fit into a pointer are stored directly in the internal buffer), you will need 10000 heap allocations for those 10000 items. With QVector, a single call to resize() lets all the items fit into a single allocation, so don't use QList if you do a lot of inserting or appending of such items; prefer QVector for that. Of course, if you are using QList to store Qt's shared classes, there is no additional allocation, which again makes QList more suitable.
So, prefer QList for most cases:
It uses indices to access individual elements, so accessing items is faster than with QLinkedList.
Inserting into the middle of the list only requires moving pointers to create space, which is faster than shifting actual QVector data around.
There is no need to manually reserve or resize space: empty space is moved to the end of the buffer for later use, and growing the buffer is cheap, because its elements (pointers) are small enough that a lot of space can be allocated without exhausting your address space.
Don't use it in the following scenarios, and prefer QVector:
If you need to ensure that your data is stored in contiguous memory locations.
If you rarely insert data at random positions but append a lot of data at the end or the beginning (which can cause many unnecessary allocations), and you still need fast indexing.
If you are looking for a (shared) replacement for simple arrays that will not grow over time.
And finally, note that QList (and QVector) have a reserve(int alloc) function which causes the internal buffer to grow if alloc is greater than its current size. This does not affect the external size of the QList: size() always returns the exact number of elements contained in the list.
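To make the difference concrete, here is a minimal sketch (MyItem is a hypothetical large element type; Qt 5 style containers are assumed):

#include <QList>
#include <QVector>

struct MyItem { int payload[256]; };   // hypothetical ~1 KB element, larger than a pointer

void fillBoth()
{
    QVector<MyItem> vec;
    vec.reserve(10000);                // one large allocation up front for the element data
    QList<MyItem> list;
    list.reserve(10000);               // only grows the small internal pointer buffer
    for (int i = 0; i < 10000; ++i) {
        vec.append(MyItem());          // stays inside the reserved block
        list.append(MyItem());         // still one heap allocation per element for the data
    }
    // size() reports elements, not capacity: both containers now report 10000.
}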
I think the reason is that QList doesn't require the element type to have a default constructor.
As a result, there is no operation in which QList ever creates an object; it only copies them.
But if you really need to resize a QList (for whatever reason), here's a function that will do it. Note that it's just a convenience function, and it's not written with performance in mind.
template<class T>
void resizeList(QList<T> &list, int newSize) {
    int diff = newSize - list.size();
    T t;                                   // value appended when growing (T must be default-constructible)
    if (diff > 0) {
        list.reserve(newSize);
        while (diff--) list.append(t);
    } else if (diff < 0) {
        list.erase(list.end() + diff, list.end());
    }
}
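A possible usage sketch of the helper above (assuming a default-constructible element type such as QString):

QList<QString> names;
names << "a" << "b" << "c";
resizeList(names, 5);   // appends two default-constructed (empty) QStrings
resizeList(names, 2);   // erases from the end, leaving {"a", "b"}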
wasle's answer is good, but it will add the same object multiple times. Here is a utility function that adds a distinct object for each new slot in a list of smart pointers.
template<class T>
void resizeSmartList(QList<QSharedPointer<T> > &list, int newSize) {
    int diff = newSize - list.size();
    if (diff > 0) {
        list.reserve(newSize);             // reserve() takes the total capacity, not the delta
        while (diff > 0) {
            QSharedPointer<T> t = QSharedPointer<T>(new T);
            list.append(t);
            diff--;
        }
    } else if (diff < 0) {
        list.erase(list.end() + diff, list.end());
    }
}
For use without smart pointers (a list of raw pointers), the following will add a distinct object for each new slot in your list.
template<class T>
void resizeList(QList<T*> &list, int newSize) {
    int diff = newSize - list.size();
    if (diff > 0) {
        list.reserve(newSize);             // reserve() takes the total capacity, not the delta
        while (diff > 0) {
            T *t = new T;                  // caller owns these and must delete them later
            list.append(t);
            diff--;
        }
    } else if (diff < 0) {
        list.erase(list.end() + diff, list.end());
    }
}
Also remember that your type must have a default constructor (or a constructor whose arguments all have default values, e.g. declared in the header as arg = someValue), or else this will fail to compile.
Just use something like
QList<Smth> myList;
// ... some operations on the list here
myList << QVector<Smth>(desiredNewSize - myList.size()).toList();
Essentially, these to/from Vector/List/Set() methods are available everywhere, which makes it possible to resize Qt containers when necessary in a somewhat manual, but simple and effective (I believe) way.
Another (1 or 2-liner) solution would be:
myList.reserve(newListSize); // note, how we have to reserve manually
std::fill_n(std::back_inserter(myList), desiredNewSize - myList.size(), Smth());
-- that's for STL-oriented folks :)
For some background on how complex an effective QList::resize() may get, see:
bugreports.qt.io/browse/QTBUG-42732 , and
codereview.qt-project.org/#/c/100738/1//ALL
Related
I am making a game engine and need to use the std::vector container for all of the components and entities in the game.
In a script the user might need to hold a pointer to an entity or component, perhaps to continuously check some kind of state. If something is added to the vector that the pointer points to and the capacity is exceeded, it is my understanding that the vector will allocate new memory and every pointer that points to any element in the vector will become invalid.
Considering this issue, I have a couple of possible solutions. After each push_back to the vector, would it be viable to check whether the vector's actual capacity has exceeded a stored capacity variable? And if so, fetch the new addresses and overwrite the old pointers? Would this be guaranteed to catch every case in which a push_back invalidates pointers?
Another solution I've found is to instead store an index to the element and access it that way, but I suspect that is bad for performance when you need to continuously check the state of that element (every 1/60 of a second).
I am aware that other containers do not have this issue, but I'd really like to make it work with a vector. It might also be worth noting that I do not know in advance how many entities/components there will be.
Any input is greatly appreciated.
You shouldn't worry about the performance of std::vector when you access an element only 60 times per second. By the way, in Release mode std::vector::operator[] typically compiles down to a single lea instruction; in Debug mode it is decorated with some runtime range checks.
If the user is going to store pointers to the objects, why even contain them in a vector?
I don't think it is a good idea to create pointers that point to vector elements, i.e. my_ptr = &my_vec[n];. The whole point of a container is to reference its contents in the normal ways the container supports, not to create outside pointers to elements of the container.
To answer your question about whether you can detect the allocations, yes you could, but it is still probably a bad idea to reference the contents of a vector by pointers to elements.
You could also reserve space in the vector when you create it, if you have some idea of what the maximum size might grow to. Then it would never resize.
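For illustration, a minimal sketch of that reserve-up-front idea (Entity and the capacity figure are placeholders):

#include <vector>

struct Entity { int state = 0; };     // placeholder element type

std::vector<Entity> entities;

void setup()
{
    entities.reserve(10000);          // an upper bound you are confident you won't exceed
    // As long as entities.size() stays within that bound, push_back() will not
    // reallocate, so pointers and references to existing elements remain valid.
}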
edit:
After reading other responses, and thinking about what you asked, another thought occurred. If your vector is a vector of pointers to objects, and you pass out the pointers to the objects to your clients, resizing the vector does not invalidate the pointers that the vector holds. The issue becomes keeping track of the lifetime of the object (who owns it), which is why using shared_ptr would be useful.
For example:
vector<shared_ptr<Stuff>> my_vec;   // Stuff stands for your object type
my_vec.push_back(stuff);
if you pass out the pointers contained in the vector to clients...
client_ptr = my_vec[3];
There will be no problem when the vector resizes. The contents of the vector will be preserved, and whatever was at my_vec[3] will still be there. The object pointed to by my_vec[3] will still be at the same address, and my_vec[3] will still contain that address. Whomever got a copy of the pointer at my_vec[3] will still have a valid pointer.
However, if you did this:
client_ptr = &my_vec[3];
And the client is dereferencing like this:
(*client_ptr)->whatever();
You have a problem. When my_vec resizes, &my_vec[3] is probably no longer valid, so client_ptr points to nowhere.
If something is added to the vector that the pointer points to and the capacity is exceeded, it is my understanding that the vector will allocate new memory and every pointer that points to any element in the vector will become invalid.
I once wrote some code to analyze what happens when a vector's capacity is exceeded. (Have you done this yet?) What that code demonstrated on my Ubuntu system with g++ v5 was that std::vector simply a) doubles the capacity, b) moves all the elements from the old to the new storage, then c) cleans up the old. Perhaps your implementation is similar; I think the details of capacity expansion are implementation dependent.
And yes, any pointer into the vector would be invalidated when push_back() causes capacity to be exceeded.
1) I simply don't use pointers-into-the-vector (and neither should you). That way the issue is completely eliminated, as it simply cannot occur (see also: dangling pointers). The proper way to access a std::vector (or std::array) element is by index, via the operator[]() method.
After any capacity expansion, the indexes of all elements below the previous capacity limit are still valid, since push_back() installs the new element at the 'end' (I think the highest memory address). An element's memory location may have changed, but its index is still the same (see the index-based sketch after this list).
2) It is my practice that I simply don't exceed the capacity. Yes, by that I mean that I have been able to formulate all my problems such that I know the required maximum-capacity. I have never found this approach to be a problem.
3) If the vector contents can not be contained in system memory (my system's best upper limit capacity is roughly 3.5 GBytes), then perhaps a vector container (or any ram based container) is inappropriate. You will have to accomplish your goal using disk storage, perhaps with vector containers acting as a cache.
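As an illustration of point 1), a minimal sketch of index-based access (the names are placeholders):

#include <cstddef>
#include <vector>

struct Component { int state = 0; };

std::vector<Component> components;

std::size_t addComponent()
{
    components.push_back(Component{});
    return components.size() - 1;    // hand this index to the script, not a pointer
}

void tick(std::size_t idx)           // called every 1/60 s
{
    // Still valid even if push_back() reallocated the vector after the index was issued.
    if (components[idx].state != 0) { /* react to the state */ }
}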
update 2017-July-31
Some code to consider from my latest Game of Life.
Each Cell_t (on the 2-d gameboard) has 8 neighbors.
In my implementation, each Cell_t has a neighbor 'list' (either std::array or std::vector, I've tried both), and after the gameboard has been fully constructed, each Cell_t's init() method is run, filling its neighbor 'list'.
// see Cell_t data attributes
std::array<int, 8> m_neighbors;
// ...
void Cell_t::init()
{
    int i = 0;
    m_neighbors[i]   = validCellIndx(m_row-1, m_col-1); // 1 - up left
    m_neighbors[++i] = validCellIndx(m_row-1, m_col  ); // 2 - up
    m_neighbors[++i] = validCellIndx(m_row-1, m_col+1); // 3 - up right
    m_neighbors[++i] = validCellIndx(m_row,   m_col+1); // 4 - right
    m_neighbors[++i] = validCellIndx(m_row+1, m_col+1); // 5 - down right
    m_neighbors[++i] = validCellIndx(m_row+1, m_col  ); // 6 - down
    m_neighbors[++i] = validCellIndx(m_row+1, m_col-1); // 7 - down left
    m_neighbors[++i] = validCellIndx(m_row,   m_col-1); // 8 - left
    // ^^^^^^^^^^^^^- returns info to quickly find the cell
}
The int value in m_neighbors[i] is the index into the gameboard vector. To determine the next state of the cell, the code 'counts the neighbor's states.'
Note - Some cells are at the edge of the gameboard ... in this implementation, validCellIndx() can return a value indicating 'no-neighbor', (above top row, left of left edge, etc.)
// multiplier: for 100x200 cells, 20,000 * m_generation => ~20,000,000 ops
void countNeighbors(int& aliveNeighbors, int& totalNeighbors)
{
    { /* ... initialize m_count[]s to 0 */ }
    // neighbor positions around the cell:  1 2 3
    //                                      8 - 4
    //                                      7 6 5
    for (auto neighborIndx : m_neighbors) {          // each of the 8 neighbors
        if (no_neighbor != neighborIndx)
            m_count[ gBoard[neighborIndx].m_state ] += 1;
    }
    aliveNeighbors = m_count[ CellALIVE ];           // CellDEAD = 1, CellALIVE
    totalNeighbors = aliveNeighbors + m_count[ CellDEAD ];
} // Cell_Arr_t::countNeighbors
init() pre-computes the indexes of this cell's neighbors. The m_neighbors array holds integer indexes, not pointers, so it is trivial to have NO pointers into the gameboard vector.
I'm merging many objects into a single vector containing render data (a mesh). This vector gets cleared and refilled on each frame (well, almost).
The issue is that clearing and then again reserving the vector size has a huge impact on performance in my case, because clear() may also change the capacity.
In other words, I need to control when the capacity of the vector gets changed. I want to keep the old capacity for quite some time, until I decide myself that it's time to change it.
I see two options:
Figure out how to control when the capacity of std::vector is about to change
Implement a memory pool for large memory objects which will fetch a large data object of <= required size and reuse / release it as I need.
Update
In addition, what if, for example, resize(10) and later resize(5) were called (just for illustration; multiply the actual numbers by some millions)?
Will the later call to resize(5) cause the vector to, maybe, reallocate?
Actually, the clear() member function leaves the vector's capacity unchanged. It only destroys (calls the destructor of) each of the vector's elements and sets the vector's size to 0.
In this situation, at each iteration, I would call clear() to destroy all the vector elements, then call the member function reserve(size) which, in the case where the vector capacity is too small, will increase it to at least size.
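A minimal sketch of that per-frame pattern (Vertex and the size estimate are illustrative placeholders):

#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };     // placeholder for the real render-data element

std::vector<Vertex> mesh;

void buildFrame(std::size_t expectedVertices)
{
    mesh.clear();                     // destroys the elements; size() becomes 0, capacity is kept
    mesh.reserve(expectedVertices);   // only reallocates if the estimate exceeds capacity()
    // ... push_back this frame's vertices ...
}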
This vector gets cleared and refilled on each frame (well, almost).
I would recommend a different approach.
Create a class that acts as a buffer for the rendering data.
If I am not mistaken, you never reduce the capacity of the buffer. You only increase its capacity when needed.
Make sure that class is an implementation detail, and that only one instance of it is ever constructed.
Here's a skeletal implementation of what I am thinking.
namespace Impl_Detail
{
    struct Buffer
    {
        size_t capacity_;
        size_t size_;
        std::vector<char> data;

        // Pick whatever default capacity makes sense for your need.
        Buffer(size_t cap = 100) : capacity_(cap), size_(0), data(cap) {}

        void ensureCapacity(size_t cap)
        {
            if ( capacity_ < cap )
            {
                capacity_ = cap;
                data.resize(capacity_);
            }
        }

        // Add any other helpful member functions as needed.
    };

    // Create an instance and use it in the implementation.
    Buffer theBuffer;
}
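A possible usage sketch for the skeleton above (assumes <algorithm> for std::copy; the renderer hand-off is illustrative):

void uploadFrame(const char *renderData, size_t n)
{
    Impl_Detail::theBuffer.ensureCapacity(n);   // grows only when needed, never shrinks
    std::copy(renderData, renderData + n,
              Impl_Detail::theBuffer.data.begin());
    Impl_Detail::theBuffer.size_ = n;           // logical size for this frame
    // ... pass theBuffer.data.data() and n to the renderer ...
}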
I'm using a list of lists to store point data in my application.
Here is an example test I made:
// using a list of lists
list<list<Point>> ls;
for (int i = 0; i < 10000; ++i)
{
    list<Point> lp;
    lp.resize(4);
    lp.push_back(Point(1,2));
    ls.push_back(lp);
}
I assume that the memory used will be
10k elements * 5 Points * Point size = 10000*5*2*4 = 400,000 bytes plus some overhead of the list container, but the memory used by the program rises dramatically.
Is this due to the overhead of the list container, or maybe because of memory fragmentation?
EDIT:
add some info and another example
Point is the MFC CPoint class (or you can define your own point class with int x, y). I'm using VS2008 in debug mode on Windows XP, and the Windows Task Manager to view the application's memory usage.
I can't use a vector instead of the outer list because I don't know its total size N beforehand, so I must push_back every new entry.
Here is the modified example:
int N = 10000;
list<vector<CPoint>> ls;
for (int i = 0; i < N; ++i)
{
    vector<CPoint> vp;
    vp.resize(5);
    vp.reserve(5);
    ls.push_back(vp);
}
and I compare it to
CPoint* p= new CPoint[N*5];
It's not "+ some overhead of list container". The list overhead is linear in the number of objects, not constant. There are 50,000 Points, but with each Point you also get two pointers (std::list is doubly linked), and with each element of ls you get another two pointers. Plus, each list has its own head and tail pointers.
So that's 140,002 (I think) extra pointers that your math doesn't account for. Note that this dwarfs the size of the Point objects themselves, since they're so small. Are you sure that list is the right container for you? vector has constant overhead - basically three pointers per container, which would be just 30,003 additional pointers on top of the Point objects. That's a large memory saving - if that is something that matters.
[Update based on Bill Lynch's comment] vector could allocate more space than 5 for your points. Worst-case, it will allocate twice as much space as you need. But since sizeof(Point) == sizeof(Point*) for you, that's still strictly better than list since list will always use three times as much space.
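To get a feel for the constant part of the overhead on your own toolchain, a small sketch (Point here is a stand-in for CPoint; figures vary by standard library implementation):

#include <cstdio>
#include <list>
#include <vector>

struct Point { int x, y; };   // stand-in for CPoint

int main()
{
    std::printf("sizeof(Point)              = %u\n", (unsigned)sizeof(Point));
    std::printf("sizeof(std::list<Point>)   = %u\n", (unsigned)sizeof(std::list<Point>));
    std::printf("sizeof(std::vector<Point>) = %u\n", (unsigned)sizeof(std::vector<Point>));
    // On top of this, std::list adds roughly two node pointers per stored element,
    // while std::vector stores its elements contiguously with no per-element overhead.
    return 0;
}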
I have an assignment that requires me to sort a heap based C style array of names as they're being read rather than reading them all and then sorting. This involves a lot of shifting the contents of the array by one to allow new names to be inserted. I'm using the code below but it's extremely slow. Is there anything else I could be doing to optimize it without changing the type of storage?
// the data member
string *_storedNames = new string[4000];

// boundary and index together define the range of elements to shift right by one
for (int k = p_boundary - 1; k > index; k--)
    _storedNames[k] = _storedNames[k - 1];
EDIT2:
As suggested by Cartroo I'm attempting to use memmove with the dynamic data that uses malloc. Currently this shifts the data correctly but once again fails in the deallocation process. Am I missing something?
int numberOfStrings = 10, MAX_STRING_SIZE = 32;
char **array = (char **)malloc(numberOfStrings);
for(int i = 0; i < numberOfStrings; i++)
array[i] = (char *)malloc(MAX_STRING_SIZE);
array[0] = "hello world",array[2] = "sample";
//the range of data to move
int index = 1, boundary = 4;
int sizeToMove = (boundary - index) * sizeof(MAX_STRING_SIZE);
memcpy(&array[index + 1], &array[index], sizeToMove);
free(array);
If you're after minimal changes to your approach, you could use the memmove() function, which is potentially faster than your own manual version. You can't use memcpy() as advised by one commenter as the areas of memory aren't permitted to overlap (the behaviour is undefined if they do).
There isn't a lot else you can do without changing the type of your storage or your algorithm. However, if you change to using a linked list then the operation becomes significantly more efficient, although you will be doing more memory allocation. If the allocation is really a problem (and unless you're on a limited embedded system it probably isn't) then pool allocators or similar approaches could help.
EDIT: Re-reading your question, I'm guessing that you're not actually using Heapsort, you just mean that your array was allocated on the heap (i.e. using malloc()) and you're doing a simple insertion sort. In which case, the information below isn't much use to you directly, although you should be aware that insertion sort is quite inefficient compared to a bulk insert followed by a better sorting algorithm (e.g. Quicksort which you can implement using the standard library qsort() function). If you only ever need the lowest (or highest) item instead of full sorted order then Heapsort is still useful reading.
If you're using a standard Heapsort then you shouldn't need this operation at all - items are appended at the end of the array, and then the "heapify" operation is used to swap them into the correct position in the heap. Each swap just requires a single temporary variable to swap two items - it doesn't require anything to be shuffled down as in your code snippet. It does require everything in the array to be the same size (either a fixed size in-place string or, more likely, a pointer), but your code already seems to assume that anyway (and using variable length strings in a standard char array would be a pretty strange thing to be doing).
Note that strictly speaking Heapsort operates on a binary tree. Since you're dealing with an array I assume you're using the implementation where a contiguous array is used where the children of node at index n are stored at indicies 2n and 2n+1 respectively. If this isn't the case, or you're not using a Heapsort at all, you should explain in more detail what you're trying to do to get a more helpful answer.
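If a real heap is what is intended, the standard library already provides the append-then-sift-up step described above; a hedged sketch using std::vector as the backing array (not the asker's required raw array, just for comparison):

#include <algorithm>
#include <string>
#include <vector>

std::vector<std::string> names;   // the heap's backing array

void addName(const std::string &name)
{
    names.push_back(name);                       // append at the end of the array...
    std::push_heap(names.begin(), names.end());  // ...then sift it up into heap order
}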
EDIT: The following is in response to your updated code above.
The main reason you see a problem during deallocation is if you trampled some memory - in other words, you're copying something outside the size of the area you allocated. This is a really bad thing to do as you overwrite the values that the system is using to track your allocations and cause all sorts of problems which usually result in your program crashing.
You seem to have some slight confusion as to the nature of memory allocation and deallocation, first of all. You allocate an array of char*, which on its own is fine. You then allocate arrays of char for each string, which is also fine. However, you then just call free() for the initial array - this isn't enough. There needs to be a call to free() to match each call to malloc(), so you need to free each string that you allocate and then free the initial array.
Secondly, you set sizeToMove to a multiple of sizeof(MAX_STRING_SIZE), which almost certainly isn't what you want. This is the size of the variable used to store the MAX_STRING_SIZE constant. Instead you want sizeof(char*). On some platforms these may be the same, in which case things will still work, but there's no guarantee of that. For example, I'd expect it to work on a 32-bit platform (where int and char* are the same size) but not on a 64-bit platform (where they're not).
Thirdly, you can't just assign a string constant (e.g. "hello world") to an allocated block - what you're doing here is replacing the pointer. You need to use something like strncpy() or memcpy() to copy the string into the allocated block. I suggest snprintf() for convenience because strncpy() has the problem that it doesn't guarantee to nul-terminate the result, but it's up to you.
Fourthly, you're still using memcpy() and not memmove() to shuffle items around.
Finally, I've just seen your comment that you have to use new and delete. There is no equivalent of realloc() for these, but that's OK if everything is known in advance. It looks like what you're trying to do is something like this:
bool addItem(char *item, char *list[], size_t listSize, size_t listMaxSize)
{
    // Check if list is full.
    if (listSize >= listMaxSize) {
        return false;
    }
    // Insert item inside list.
    for (unsigned int i = 0; i < listSize; ++i) {
        if (strcmp(list[i], item) > 0) {
            memmove(list + i + 1, list + i, sizeof(char*) * (listSize - i));
            list[i] = item;
            return true;
        }
    }
    // Append item to list.
    list[listSize] = item;
    return true;
}
I haven't compiled and checked that, so watch out for off-by-one errors and the like, but hopefully you get the idea. This function should work whether you use malloc() and free() or new and delete, but it assumes that you've already copied the string item into an allocated buffer that you will keep around, because of course it just stores a pointer.
Remember that of course you need to update listSize yourself outside this function - this just inserts an item into the correct point in the array for you. If the function returns true then increment your copy of listSize by 1 - if it returns false then you didn't allocate enough memory so your item wasn't added.
Also note that in C and C++, for the array list the syntax &list[i] and list + i are totally equivalent - use the first one instead within the memmove() call if you find it easier to understand.
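For illustration, a possible driver for that function (assumes <cstring>, <iostream>, and <string>; names and sizes are arbitrary):

#include <cstring>
#include <iostream>
#include <string>

void readNames(std::istream &in)
{
    const size_t maxNames = 4000;
    char **names = new char *[maxNames];
    size_t nameCount = 0;

    std::string line;
    while (std::getline(in, line) && nameCount < maxNames) {
        char *copy = new char[line.size() + 1];
        std::strcpy(copy, line.c_str());   // addItem() only stores the pointer, so keep our own buffer
        if (addItem(copy, names, nameCount, maxNames))
            ++nameCount;                   // the caller tracks the size, as noted above
        else
            delete [] copy;                // list was full; don't leak the buffer
    }
    // ... use names[0..nameCount), then delete [] each entry and delete [] names ...
}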
I think what you're looking for is heapsort: http://en.wikipedia.org/wiki/Heapsort#Pseudocode
An array is a common way to implement a binary heap (i.e. a tree in which, for a max-heap, every parent node compares greater than or equal to its children).
Heapsort sorts an array of a specified length. In your case, since the size of the array is going to increase "online", all you need to do is change the input size that you pass to heapsort (i.e. increase the number of elements being considered by 1).
Since your array is sorted and you can't use any other data structure your best bet is likely to perform a binary search, then to shift the array up one to free space at the position for insertion and then insert the new element at that position.
To minimise the cost of shifting the array, you could make it an array of pointers to string:
string **_storedNames = new string*[4000];
Now you can use memmove (although you might find now that an element-by-element copy is fast enough). But you will have to manage the allocation and deletion of the individual strings yourself, and this is somewhat error-prone.
Other posters who recommend using memmove on your original array don't seem to have noticed that each array element is a string (not a string* !). You can't use memmove or memcpy on a class like this.
I have an array int *playerNum which stores the list of all the numbers of the players in the team. Each slot e.g playerNum[1]; represents a position on the team, if I wanted to add a new player for a new position on the team. That is, inserting a new element into the array somewhere near the middle, how would I go about doing this?
At the moment, I was thinking you memcpy up to the position you want to insert the player into a new array and then insert the new player and copy over the rest of it?
(I have to use an array)
If you're using C++, I would suggest not using memcpy or memmove but instead using the copy or copy_backward algorithms. These will work on any data type, not just plain old integers, and most implementations are optimized enough that they will compile down to memmove anyway. More importantly, they will work even if you change the underlying type of the elements in the array to something that needs a custom copy constructor or assignment operator.
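A minimal sketch of the copy_backward approach (the array is assumed to have room for at least count + 1 elements):

#include <algorithm>

void insertPlayer(int playerNum[], int count, int pos, int newPlayer)
{
    // Shift [pos, count) one slot to the right; copy_backward handles the overlap correctly.
    std::copy_backward(playerNum + pos, playerNum + count, playerNum + count + 1);
    playerNum[pos] = newPlayer;
}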
If you have to use an array, after having made sure you have enough storage (using realloc if necessary), use memmove to shift the items from the insertion point to the end by one position, then save your new player at the desired location.
You can't use memcpy if the source and target areas overlap.
This will fail as soon as the objects in your array have non-trivial copy-constructors, and it's not idiomatic C++. Using one of the container classes is much safer (std::vector or std::list for instance).
Your solution using memcpy is correct (under a few assumptions mentioned by others).
However, since you are programming in C++, it is probably a better choice to use std::vector and its insert method.
vector<int> myvector (3,100);
myvector.insert ( myvector.begin() + 1 , 42 );   // insert() takes an iterator, not an index
An array occupies a contiguous block of memory; there is no function to insert an element in the middle. You can create a new array whose size is larger than the original's by one, then copy the original array into the new one along with the new element:
int insertPos = arSize / 2;              // position for the new element
for (int i = 0; i < insertPos; i++)
    newarray[i] = ar[i];                 // copy the elements before the insert position
newarray[insertPos] = newelement;        // place the new element
for (int i = insertPos; i < arSize; i++)
    newarray[i + 1] = ar[i];             // copy the rest, shifted right by one
If you use the STL, things become easier: use std::list.
As you're talking about an array and "insert" I assume that it is a sorted array. You don't necessarily need a second array provided that the capacity N of your existing array is large enough to store more entries (N>n, where n is the number of current entries). You can move the entries from k to n-1 (zero-indexed) to k+1 to n, where k is the desired insert position. Insert the new element at index position k and increase n by one. If the array is not large enough in the beginning, you can follow your proposed approach or just reallocate a new array of larger capacity N' and copy the existing data before applying the actual insert operation described above.
BTW: As you're using C++, you could easily use std::vector.
While it is possible to use arrays for this, C++ has better solutions to offer. For starters, try std::vector, which is a decent enough general-purpose container, based on a dynamically-allocated array. It behaves exactly like an array in many cases.
Looking at your problem, however, there are two downsides to arrays or vectors:
Indices have to be 0-based and contiguous; you cannot remove elements from the middle without losing key/value associations for everything after the removed element; so if you remove the player on position 4, then the player from position 9 will move to position 8
Random insertion and deletion (that is, anywhere except the end) is expensive - O(n), that is, execution time grows linearly with array size. This is because every time you insert or delete, a part of the array needs to be moved.
If the key/value thing isn't important to you, and insertion/deletion isn't time critical, and your container is never going to be really large, then by all means, use a vector. If you need random insertion/deletion performance, but the key/value thing isn't important, look at std::list (although you won't get random access then, that is, the [] operator isn't defined, as implementing it would be very inefficient for linked lists; linked lists are also very memory hungry, with an overhead of two pointers per element). If you want to maintain key/value associations, std::map is your friend.
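A hedged sketch of the std::map option, with the team position as the key (Player is a placeholder type):

#include <map>
#include <string>

struct Player { std::string name; int number; };

void example()
{
    std::map<int, Player> team;          // key: position on the team
    team[4] = Player{"Jones", 23};       // add or replace the player at position 4
    team.erase(9);                       // removing a position leaves the other keys untouched
    std::map<int, Player>::iterator it = team.find(4);   // O(log n) lookup by position
    if (it != team.end()) { /* use it->second */ }
}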
Losing the tail:
#include <stdio.h>
#include <string.h>   /* for memmove */

#define s 10
int L[s];

/* Insert value v at position p; the last element falls off the end ("losing the tail"). */
void insert(int v, int p, int *a)
{
    memmove(a + p + 1, a + p, (s - p - 1) * sizeof(int));
    *(a + p) = v;
}

int main()
{
    for (int i = 0; i < s; i++) L[i] = i;
    insert(11, 6, L);
    for (int i = 0; i < s; i++) printf("%d %p\n", L[i], (void *)&L[i]);
    return 0;
}