Cout a whole array in C++

I am fairly new to C++. Is there a way in C++ to cout a whole static array, other than iterating over it with a for loop?
int arra[10] = {1,2,3,4};
std::cout << arra << std::endl;
I tried this, but it prints the address of the first element of the array.

The following doesn't use an explicit loop:
std::copy(std::begin(arra),
          std::end(arra),
          std::ostream_iterator<int>(std::cout, "\n"));
but a loop seems simpler to read/write/understand:
for (const auto& e : arra) {
    std::cout << e << std::endl;
}

#include <iostream>
using namespace std;

int main() {
    int myarr[5] = {9, 84, 7, 55, 6};
    for (int i = 0; i < 5; i++) {
        cout << myarr[i] << endl;
    }
}

Somehow you are going to have to visit each element of the array to display the contents. This can be done long-form, or with a loop. Fortunately, we can use std::copy to hide the loop and display the array.
int arr[] = {1,2,3,4,5};
std::copy(std::begin(arr), std::end(arr), std::ostream_iterator<int>(std::cout, " "));

You need to either loop over the array
int arra[10] = {1,2,3,4};
for (int i = 0; i < sizeof(arra) / sizeof(arra[0]); ++i)
{
    std::cout << arra[i] << std::endl;
}
or use
std::copy(std::begin(arra), std::end(arra), std::ostream_iterator<int>(std::cout,"\n"));

As you are using a raw array rather than a std::array, you could easily do this:
std::copy(arra, arra + 10,
          std::ostream_iterator<int>(std::cout, "\n"));
If you want to write good code, you could use std::array and then simply write arra.begin() and arra.end().

There are basically two ways. The first one is a loop somewhere. The loop can be explicit - in your code - or implicit, through the library. Example of a library loop:
std::for_each(cbegin(arra), cend(arra), [](int i) { std::cout << i << " "; });
The second way of printing the array is with recursion. Here is an example:
void print_element(const int* head, const int* tail) {
    if (head == tail)
        return;
    std::cout << *head << " ";
    print_element(head + 1, tail);
}
....
print_element(arr, arr + sizeof(arr) / sizeof(*arr));
A couple of words about the recursion solution. Depending on the optimization level, it can produce different results. The performance of the recursion will be roughly equivalent to the performance of the loop at any optimization level with the AMD64 ABI. The reason is that arguments to functions are passed in registers (not pushed onto the stack), and frame pointers are not used at any optimization level. With this in mind, only the CALL/RET pair (which pushes/pops RIP) slows down execution compared to the loop, and this degradation is not measurable. The real issue, however, is that there is a limited number of recursive calls that can be made on a given system (usually in the low thousands), so printing an array of 1000000 elements is practically guaranteed to crash the application.
With higher optimization levels that involve tail-call optimization, the recursive calls will be translated into plain jmps and the performance will be exactly the same as the loop's. This also eliminates the maximum-recursion-depth problem, so arrays of any size can be printed.

It prints an address because the array name decays to a pointer to its first element.
Try this:
void printArray(int arr[], int n)
/* n is the size here */
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

Related

How come my vector array won't output anything after I erase an element?

Recently I've started learning C++, and every day I do a C++ practice exercise to understand the language more. Today I was learning about vectors and I hit a roadblock.
I'm trying to make a simple program that takes an array, puts it into a vector, then removes all the odd numbers. But for some reason when I erase an element from the vector, and output the modified vector, it doesn't output anything.
If somebody could guide me to the right direction on what I'm doing wrong, that would be great!
remove.cpp
#include <iostream>
#include <vector>
using namespace std;
class removeOddIntegers {
public:
    void removeOdd(int numbs[]) {
        vector<int> removedOdds;
        for(int i = 0; i < 10; ++i) {
            removedOdds.push_back(numbs[i]);
        }
        for(auto i = removedOdds.begin(); i != removedOdds.end(); ++i) {
            if(*i % 2 == 1) {
                removedOdds.erase(removedOdds.begin() + *i);
                std::cout << "Removed: " << *i << endl;
            }
        }
        for(auto i = removedOdds.begin(); i != removedOdds.end(); ++i) {
            std::cout << *i << endl; //doesn't output anything.
        }
    }
};
main.cpp
#include <iostream>
#include "remove.cpp"
using namespace std;
int main() {
    removeOddIntegers r;
    int numbers[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    r.removeOdd(numbers);
    return 0;
}
Now, I understand that I could just filter through the array, and only push_back the even numbers to the vector, and quite frankly, that works like a charm. But I want to understand why my method doesn't work. How come when I remove an element from the vector, it just fails to output anything?
Thanks in advance!
There are a few things wrong, but they mostly boil down to the same fundamental issue: you are violating the iterator guarantees of std::vector::erase:
Invalidates iterators and references at or after the point of the erase, including the end() iterator.
You do this both when dereferencing the deleted iterator to display your "removed" message, and also when calling ++i for the loop.
In addition, your call removedOdds.erase(removedOdds.begin() + *i); is wrong, because it's using the actual value in the vector as an offset from the beginning. That assumption is completely wrong.
The proper way to erase an element at an iterator and retain a valid iterator is:
i = removedOdds.erase(i);
Here is your loop with minimum changes required to fix it:
for (auto i = removedOdds.begin(); i != removedOdds.end(); ) {
    if (*i % 2 == 1) {
        std::cout << "Removed: " << *i << endl;
        i = removedOdds.erase(i);
    } else {
        ++i;
    }
}
Notice how the iterator is advanced in the body of the loop now. You can do a thought experiment to think about why. Or you can try doing it the wrong way and use an input like { 1, 3, 5, 7, 9 } to demonstrate the problem.
This is still not the idiomatic way to remove elements from a vector. The reason is that std::vector::erase is a linear operation that must shuffle the entire remainder of the vector; if you erase one element at a time, you essentially get O(N^2) time complexity, so the removals should be batched into a single pass.
The recommended approach is the erase-remove idiom with std::remove_if:
removedOdds.erase(std::remove_if(removedOdds.begin(), removedOdds.end(),
                                 [](int n) { return n % 2 == 1; }),
                  removedOdds.end());
Here erase() removes everything from the iterator returned by std::remove_if up to the end.
The flaw in the shown algorithm is more easily observed with a much simpler example:
for(int i = 0; i < 2; ++i) {
    removedOdds.push_back(numbs[i]);
}
Suppose numbs began with the values 0 and 1; this then initializes the vector with just those two values. This is small enough to be able to follow along in your head, as you mentally execute the shown code:
for(auto i = removedOdds.begin(); i != removedOdds.end(); ++i) {
Nothing interesting will happen with the first value, 0, that gets iterated here. ++i increments the iterator to point to the value 1, then:
if(*i % 2 == 1) {
    removedOdds.erase(removedOdds.begin() + *i);
    std::cout << "Removed: " << *i << endl;
}
This time erase() removes 1 from the vector. But that's what i is also pointing to, of course. Then, if you look in your C++ reference, you will discover that std::vector::erase:
invalidates iterators and references at or after the point of the erase,
including the end() iterator.
i is now "at the point of the erase"; therefore i is no longer a valid iterator. Any subsequent use of i is undefined behavior.
And i immediately gets used: it is incremented in the for loop's iteration expression. That's your undefined behavior.
With the original vector containing values 0 through 9, your debugger will show all sorts of interesting kinds of undefined behavior. You can use it to see whether the shown code ever manages to survive until it encounters a higher odd value, like 7 or 9. If it does, the vector will by then be much, much smaller, yet removedOdds.erase(removedOdds.begin() + *i); will attempt to remove the 7th or the 9th value of a vector that's about half that size - a completely non-existent position - with much hilarity ensuing.
To summarize: your "method doesn't work" because the algorithm is fundamentally flawed in multiple ways, and the reason you get "no output" is because the program crashes.

Experiment with find algorithm using sentinel

I was experimenting with a known algorithm which aims to reduce the number of comparisons when finding an element in an unsorted array. The algorithm uses a sentinel added to the back of the array, which allows us to write a loop with only one comparison per element instead of two. It's important to note that the overall big-O computational complexity is not changed; it is still O(n). However, counting comparisons, the standard find algorithm is, so to speak, O(2n) while the sentinel algorithm is O(n).
Standard find algorithm from the c++ library works like this:
template<class InputIt, class T>
InputIt find(InputIt first, InputIt last, const T& value)
{
    for (; first != last; ++first) {
        if (*first == value) {
            return first;
        }
    }
    return last;
}
We can see two comparisons there and one increment.
In the algorithm with sentinel the loop looks like this:
while (a[i] != key)
++i;
There is only one comparison and one increment.
I made some experiments and measured times, but on every computer the results were different. Unfortunately, I didn't have access to any serious machine; I only had my laptop running Ubuntu inside VirtualBox, under which I compiled and ran the code, and I had a problem with the amount of memory. I tried online compilers like Wandbox and Ideone, but the time and memory limits didn't allow me to make reliable experiments. Every time I ran my code, changing the number of elements in my vector or the number of test repetitions, I saw different results: sometimes the times were comparable, sometimes std::find was significantly faster, and sometimes the sentinel algorithm was.
I was surprised, because logic says the sentinel version should indeed be faster, every time. Do you have any explanation for this? Do you have any experience with this kind of algorithm? Is it worth the effort to even try to use it in production code when performance is crucial, the array cannot be sorted, and any other mechanism to solve this problem (like a hash map, indexing, etc.) cannot be used?
Here's my code of testing this. It's not beautiful, in fact it is ugly, but the beauty wasn't my goal here. Maybe something is wrong with my code?
#include <iostream>
#include <algorithm>
#include <chrono>
#include <vector>
using namespace std::chrono;
using namespace std;
const unsigned long long N = 300000000U;

static void find_with_sentinel()
{
    vector<char> a(N);
    char key = 1;
    a[N - 2] = key; // make sure the searched element is at the last-but-one index
    unsigned long long high = N - 1;
    auto tmp = a[high];
    // put a sentinel at the end of the array
    a[high] = key;
    unsigned long long i = 0;
    while (a[i] != key)
        ++i;
    // restore the original value
    a[high] = tmp;
    if (i == high && key != tmp)
        cout << "find with sentinel, not found" << endl;
    else
        cout << "find with sentinel, found" << endl;
}

static void find_with_std_find()
{
    vector<char> a(N);
    char key = 1;
    a[N - 2] = key; // make sure the searched element is at the last-but-one index
    auto pos = find(begin(a), end(a), key);
    if (pos != end(a))
        cout << "find with std::find, found" << endl;
    else
        cout << "find with std::find, not found" << endl;
}

int main()
{
    const int times = 10;
    high_resolution_clock::time_point t1 = high_resolution_clock::now();
    for (auto i = 0; i < times; ++i)
        find_with_std_find();
    high_resolution_clock::time_point t2 = high_resolution_clock::now();
    auto duration = duration_cast<milliseconds>(t2 - t1).count();
    cout << "std::find time = " << duration << endl;
    t1 = high_resolution_clock::now();
    for (auto i = 0; i < times; ++i)
        find_with_sentinel();
    t2 = high_resolution_clock::now();
    duration = duration_cast<milliseconds>(t2 - t1).count();
    cout << "sentinel time = " << duration << endl;
}
Move the memory allocation (vector construction) outside the measured functions (e.g. pass the vector as argument).
Increase times to a few thousands.
You're doing a whole lot of time-consuming work in your functions. That work is hiding the differences in the timings. Consider your find_with_sentinel function:
static void find_with_sentinel()
{
    // ***************************
    vector<char> a(N);
    char key = 1;
    a[N - 2] = key; // make sure the searched element is at the last-but-one index
    // ***************************
    unsigned long long high = N - 1;
    auto tmp = a[high];
    // put a sentinel at the end of the array
    a[high] = key;
    unsigned long long i = 0;
    while (a[i] != key)
        ++i;
    // restore original value
    a[high] = tmp;
    // ***************************************
    if (i == high && key != tmp)
        cout << "find with sentinel, not found" << endl;
    else
        cout << "find with sentinel, found" << endl;
    // **************************************
}
The three lines at the top and the four lines at the bottom are identical in both functions, and they're fairly expensive to run. The top contains a memory allocation and the bottom contains an expensive output operation. These are going to mask the time it takes to do the real work of the function.
You need to move the allocation and the output out of the function. Change the function signature to something like:
static size_t find_with_sentinel(vector<char>& a, char key);
(The vector must be passed by reference, both to avoid a copy and because the sentinel version writes into the array.) In other words, make it similar to std::find. If you do that, then you don't have to wrap std::find, and you get a more realistic view of how your function will perform in a typical situation.
It's quite possible that the sentinel find function will be faster. However, it comes with some drawbacks. The first is that you can't use it with immutable lists. The second is that it's not safe to use in a multi-threaded program due to the potential of one thread overwriting the sentinel that the other thread is using. It also might not be "faster enough" to justify replacing std::find.

How to make my loop depend on the number of array elements | C++

I have an array and a for loop. I want the for loop to stop depending on the number of elements the array has.
For example, if I have int array[] = {1,0,1,0,1},
I want the loop to execute the code 5 times - similar to .length() for strings, but for an integer array. An example with simple code would be the best answer :)
Like this pseudocode:
for(int b = 0; b < array_length; b++)
Unless you need the index, the following works fine:
int ar[] = { 1, 2, 3, 4, 5 };
for (auto i : ar) {
    std::cout << i << std::endl;
}
Since the question is tagged with C++, I'll have to suggest std::vector as the best solution. (Also for the future)
Look into this: std::vector
So for you this'd be like the following:
std::vector<int> array {1,0,1,0,1};
for(int i = 0; i < array.size(); i++)
...
Or in the worst case an std::array if you don't want the features of a vector.
See also: std::array
To find length of an array, use this code
int array[5];
std::cout << "Length of array = " << (sizeof(array)/sizeof(*array)) << std::endl;
So in your case, it will be for example:
int array[5];
for(int b = 0; b < sizeof(array)/sizeof(*array); b++){
    std::cout << array[b] << std::endl;
}

efficient way to copy array with mask in c++

I have two arrays. One is x times the size of the second one.
I need to copy from the first (bigger) array to the second (smaller) array only every x-th of its elements,
meaning indices 0, x, 2x, ...
Each array sits as a contiguous block in memory.
The arrays hold simple values.
I am currently doing it with a loop.
Is there any faster, smarter way to do this?
Maybe with ostream?
Thanks!
You are doing something like this, right?
#include <cstddef>

int main()
{
    const std::size_t N = 20;
    const std::size_t x = 5;
    int input[N * x];
    int output[N];
    for (std::size_t i = 0; i < N; ++i)
        output[i] = input[i * x];
}
Well, I don't know of a standard function that can do that, so I would use the for loop. It is fast.
EDIT: an even faster solution (avoiding the multiplication) (C++03 version):
int* inputit = input;
int* outputit = output;
int* outputend = output + N;
while (outputit != outputend)
{
    *outputit = *inputit;
    ++outputit;
    inputit += x;
}
If I get you right, you want to copy every n-th element. The simplest solution would be:
#include <iostream>

int main(int argc, char **argv) {
    const int size[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    int out[5];
    int *pout = out;
    for (const int *i = &size[0]; i < &size[10]; i += 3) {
        std::cout << *i << ", ";
        *pout++ = *i;
        if (pout > &out[4]) {
            break;
        }
    }
    std::cout << "\n";
    for (const int *i = out; i < pout; i++) {
        std::cout << *i << ", ";
    }
    std::cout << std::endl;
}
You can use copy_if and a lambda in C++11:
copy_if(a.begin(), a.end(), b.begin(), [&] (const int& i) -> bool
    { size_t index = &i - &a[0]; return index % x == 0; });
A test case would be:
#include <iostream>
#include <vector>
#include <algorithm> // std::copy_if
using namespace std;

int main()
{
    std::vector<int> a;
    a.push_back(0);
    a.push_back(1);
    a.push_back(2);
    a.push_back(3);
    a.push_back(4);
    std::vector<int> b(3);
    int x = 2;
    std::copy_if(a.begin(), a.end(), b.begin(), [&] (const int& i) -> bool
        { size_t index = &i - &a[0]; return index % x == 0; });
    for(size_t i = 0; i < b.size(); i++)
    {
        std::cout << " " << b[i];
    }
    return 0;
}
Note that you need to use a C++11 compatible compiler (if gcc, with -std=c++11 option).
#include <array>
#include <iterator>

template<typename InIt, typename OutIt>
void copy_step_x(InIt first, InIt last, OutIt result, int x)
{
    // note: x must divide the range length exactly, or `it` steps past `last`
    for (auto it = first; it != last; std::advance(it, x))
        *result++ = *it;
}

int main()
{
    std::array<int, 64> ar0;
    std::array<int, 32> ar1;
    copy_step_x(std::begin(ar0), std::end(ar0), std::begin(ar1), ar0.size() / ar1.size());
}
The proper and clean way of doing this is a loop, as has been said before. A number of good answers here show you how to do that.
I do NOT recommend the following approach - it depends on a lot of specifics (the value range of X, the size and value range of the variables, and so on) - but in some cases you could do it like this:
for every 4 bytes:
    tmp = copy a 32-bit variable from the array; this now contains the 4 new values
    real_tmp = bitmask tmp to get the right one of those 4 values
    add it to the list
This only works if you want values <= 255 and X == 4, but if you want something faster than a plain loop, this is one way of doing it. The method can be adapted to 16-, 32-, or 64-bit loads and steps of 2 through 8 (64-bit), but it will not work for X > 8, for values that are not laid out linearly in memory, or for classes.
For this kind of optimization to be worth the hassle the code need to run often, I assume you've run a profiler to confirm that the old copy is a bottleneck before starting implementing something like this.
The following is an observation on how most CPU designs are unimaginative when it comes to this sort of thing.
On some OpenVPX boards you have the ability to DMA data from one processor to another. The one that I use has a pretty advanced DMA controller, and it can do this sort of thing for you.
For example, I could ask it to copy your big array to another CPU, but skipping over N elements of the array, just like you're trying to do. As if by magic the destination CPU would have the smaller array in its memory. I could also if I wanted perform matrix transformations, etc.
The nice thing is that it takes no CPU time at all to do this; it's all done by the DMA engine. My CPUs can then concentrate on harder sums instead of being tied down shuffling data around.
I think the Cell processor in the PS3 can do this sort of thing internally (I know it can DMA data around, I don't know if it will do the strip mining at the same time). Some DSP chips can do it too. But x86 doesn't do it, meaning us software programmers have to write ridiculous loops just moving data in simple patterns. Yawn.
I have written a multithreaded memcpy() in the past to do this sort of thing. The only way you're going to beat a for loop is to have several threads doing your for loop in several parallel chunks.
If you pick the right compiler (e.g. Intel's ICC or Sun/Oracle's Sun Studio) it can be made to automatically parallelise your for loops on your behalf (so your source code doesn't change). That's probably the simplest way to beat your original for loop.

Is this legal use of a for loop?

For instance:
vector<int> something;
//imagine i add some elements to the vector here
int* pointy;
for (int i = 0; i < something.size(); pointy = &something[i++]) {
//do some work with pointy
}
It seems to work and saves me a line, but is there any danger of weird bugs popping up down the line because of this?
This is dangerous, because pointy is unassigned during the first iteration; dereferencing it there would be undefined behavior. If the loop body does not use pointy on that initial iteration it may be OK, but it is impossible to tell without seeing the body of the loop.
Since you are using std::vector, using iterators would save you another line, because you wouldn't need to declare pointy. You would be able to determine the offset without i by subtracting something.begin() from the current iterator:
for (vector<int>::iterator iter = something.begin(); iter != something.end(); ++iter) {
    cout << "Item at index " << (iter - something.begin()) << " is " << *iter << endl;
}
Yes, that is dangerous, as dasblinkenlight pointed out. But there's a simpler way to eliminate this kind of problems.
Write your code for simplicity and readability first. Compressing your loop into the smallest possible number of lines won't add anything in terms of performance, and even if it did, you shouldn't care as long as your profiler doesn't tell you that your loop is a bottleneck.
On the other hand, it will make your code harder to read and, possibly, more prone to bugs (as you've probably noticed).
In C++11, consider using a range-based for loop:
for (int& p : something)
{
// ...
}
In C++03, consider using std::for_each(), or a classical loop based on iterators:
for (std::vector<int>::iterator i = something.begin(); i != something.end(); ++i)
{
// use *i to refer to the current pointy
// use (i - something.begin()) to get its index
// ...
}
It is very dangerous, because i is not unsigned. It can blow up in some rare cases. :)
Whether this is safe really depends on what you are after.
pointy is going to be a pointer to an element in the vector.
This means that if you change the value pointy points to, you are actually changing the content of the vector for that specific element.
As for me, I like to handle std::vector like this for larger objects:
std::vector<int*> mynumbers;
// add elements here like this:
int somenumber = 5;
mynumbers.push_back(&somenumber);
for(size_t i = 0; i < mynumbers.size(); i++)
{
    cout << "Element Nr. " << i << ": " << *mynumbers.at(i) << endl;
    // modify like this:
    *mynumbers.at(i) = 0;
}
The pointer instead of the object itself is because std::vector copies pointers faster than large objects, but for int this doesn't really make much of a difference, so you can also do it like this:
std::vector<int> mynumbers;
mynumbers.push_back(5);
int* pointy;
for(size_t i = 0; i < mynumbers.size(); i++)
{
    pointy = &mynumbers.at(i);
}
Works great for me!