Which control structure has the lowest time complexity? - C++

In programming, we use many control structures to iterate. Which one is the best way to iterate with respect to time complexity?
int a[500], n = 10;
for(int i = 0; i <= n; i++)
{
    cin >> a[i];
}
How can I change this iteration to achieve lower complexity?
Which is the best to use for iteration:
for
while
do while

for, while and do-while (and also goto) are really the same thing. No matter what loop you create with one of these constructs, you can always create an equivalent loop with the same time complexity using the others. (Almost true. The only exception is that the do-while loop always runs at least once.)
For example, your loop
for(int i=0;i<=n;i++) {
...
}
corresponds to
int i=0;
while(i<=n) {
...
i++;
}
and
int i=0;
START:
...
i++;
if(i<=n)
goto START;
You could make an equivalent do-while too, although for this pattern it rarely makes sense; for completeness, a sketch follows.
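A minimal do-while equivalent (note the assumption that n >= 0, since the body executes once before the condition is first checked):
int i=0;
do {
...
i++;
} while(i<=n);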
Which one you should choose is more a matter of design than performance. In general:
for - When you know the number of iterations before the loop starts
while - When you don't know
do-while - When you don't know, but at least once
goto - Never (Some exceptions exist)
A benefit of for loops is that you can declare variables that exist only within the loop scope and can also be used in the loop condition.
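For example (computeLimit() here is a hypothetical function; both variables vanish after the loop):
for(int i = 0, limit = computeLimit(); i < limit; i++) {
    // i and limit exist only inside the loop
}
// neither i nor limit is visible here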

This will iterate from i=0 to i=10, so 11 iterations in total. The time complexity of any basic loop over N elements is O(N).
All the options mentioned above (for, while, do-while) have the same time complexity.

You should consider caching techniques for such purposes. If you are interested: the for and while keywords in fact do the same thing and compile to almost the same instructions (both boil down to conditional jump, jmp, instructions). There is no silver bullet. Depending on the nature of your program, the only ways to speed up looping are caching and parallelization, if they fit your goals. Perhaps there are constant values that are computed once and used many times? Then cache the result if possible; this can reduce the time spent to a constant. Or do the work in parallel. But often that is not the right approach either, since the compiler already does many of these things for you. Better to concentrate on the architecture of your program.
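As a minimal sketch of the caching idea (expensiveConstant() is a hypothetical stand-in for a costly computation whose result does not change between iterations):

#include <iostream>

int expensiveConstant() { return 42; } // stand-in for a costly computation

int main() {
    int a[500], n = 10;
    int cached = expensiveConstant(); // computed once, outside the loop
    for(int i = 0; i <= n; i++)
    {
        a[i] = cached; // reused on every iteration instead of recomputed
    }
    std::cout << a[0] << '\n';
}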

The use of for, while, do-while, for-each, etc. could be considered a classic example of syntactic sugar. They're just different ways to do the same thing, but in certain cases some control structures are "sweeter" than others. For instance, if you want to keep iterating iff (if and only if) a boolean stays true (for instance when using an Iterator), a while loop looks much better than a for loop (well, that's a subjective comment), but the complexity will be the same.
while (iter.next()) {
// Do something
}
for (;iter.next();) {
// Do something
}
In terms of time complexity they're iterating over the same number of elements; in your example N=10, therefore O(N). How can you make it better? It depends. If you have to iterate over the whole array, the big-O best case will always be O(N). In terms of ~N, though, that statement is not always true. For instance, if you iterate over just half of the array using 2 starting points, one at i=0 and the other at i=n-1, you can achieve a time complexity of ~N/2:
for(int i = 0; i < n/2; i++)
{
    int x = a[i];
    int y = a[n-i-1];
    // Do something with those values
    // (note: when n is odd, the middle element needs separate handling)
}
In big-O terms it is the same complexity, given that ~N/2 -> O(N), but if you have a set of 10k records, reading just 5k of them is an achievement! What I'm trying to say with this last case is that if you want to improve your code's complexity, you need to start looking at better data structures and algorithms (this is just a simple, silly example; there are beautiful algorithms and data structures for many cases). Just remember: for vs. while is not the big prOblem!
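As a minimal sketch of how a better algorithm beats tweaking the loop keyword: finding a value in a sorted array by binary search takes O(log N) comparisons instead of the O(N) of a linear scan.

#include <algorithm>
#include <iostream>

int main() {
    int a[10] = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}; // must be sorted
    // A linear scan would inspect up to 10 elements; binary search needs ~4.
    bool found = std::binary_search(a, a + 10, 13);
    std::cout << (found ? "found" : "not found") << '\n';
}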

Related

Which is more efficient, set or vector

I have a bit of an issue. I was recently told that for unordered input, say 1 million random values, using a set would be more efficient than using a vector and then sorting that vector with the standard sort algorithm. But when I tried both and measured them with the time command in the terminal and with valgrind, the vector was faster in both time and space, even with the added call to sort. The person who gave me the advice to use the set is a lot more experienced than me in C++, but I always have to test things out myself before taking people's advice. The test codes follow.
For Set
std::set<int> testSet;
for(int i(0); i<= 1000000; ++i)
testSet.insert(-i);
For Vector
std::vector<int> testVector;
for(int i(0); i<= 1000000; ++i)
testVector.push_back(i * -1);
std::sort(testVector.begin(), testVector.end());
I know that these are not random values; it wouldn't be a fair comparison otherwise, since set does not allow duplicates and vector does, so they would end up with different sizes. Can anyone clarify why the set should be used, aside from the no-duplicates point?
I did not do any tests with unordered_set either; I'm not too sure of the differences between the two.
This is too vague and ignores/misses out several crucial factors. If your friend said precisely this, then your friend (regardless of his or her experience) was wrong. More likely you are somewhat misinterpreting their words and reading into them a simplified version of matters.
When you want a sorted final product, the sorting is "amortized" when you insert into a set, because you get little bits of sorting action each time. If you will be inserting periodically and many times, then that spreading-out of the workload may be what you want. The total, when added up, may still be more than for a vector (consider the occasional rebalancing and so forth; your vector just needs to be moved to a larger block of memory once in a while), but you've spread it out so as not to noticeably slow down some individual other part of your program.
But if you're just dumping all the elements into a vector and sorting straight away, not only is there less work for the container & algorithm to do but you probably don't mind it taking a noticeable amount of time.
You haven't really stated your use case in any detail, so I won't pretend to give specifics here, but the only possible answer to your question as posed is both "it depends" and "the question is fundamentally somewhat meaningless": you cannot just take two data structures and sorting methodologies and ask "which is more efficient?" without a use case. You have, however, correctly measured the time and space requirements, and if you've done that against your real-world use case then, well, you have your answer, don't you?
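If you want to repeat the measurement inside the program rather than with the external time command, a minimal sketch with std::chrono (one million insertions, as in the question; remember to build with optimizations on) might look like this:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <set>
#include <vector>

int main() {
    using Clock = std::chrono::steady_clock;

    auto t0 = Clock::now();
    std::set<int> testSet;
    for(int i = 0; i <= 1000000; ++i)
        testSet.insert(-i);
    auto t1 = Clock::now();

    std::vector<int> testVector;
    for(int i = 0; i <= 1000000; ++i)
        testVector.push_back(-i);
    std::sort(testVector.begin(), testVector.end());
    auto t2 = Clock::now();

    using ms = std::chrono::milliseconds;
    std::cout << "set:    " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n";
    std::cout << "vector: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
}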

Most efficient way to iterate through Lists and Forward List

In one of the questions on the studio for my class, it asks to create iterators for 2 different object types (list, forward_list) that point 2 past the beginning of each container, in the MOST EFFICIENT MANNER. I'm not sure what constitutes the most efficient manner.
Obviously I could use a for loop to move the iterator twice, but I'm not sure that's what you are looking for. I also tried using the next function, but it did not appear to work. Is there a better way than the for loop?
auto it = list.begin();
for (int i = 0; i < 2; i++){
    ++it;
}
Thanks so much for your time!
Efficiency - for this context - can mean efficient use of memory, CPU cycles, non-duplication of effort, and more.
The most important aspect of your exercise is to have some kind of understanding of the code that will be used. That does not just mean the code you write, but the code contained in the libraries you use. Once you are comfortable with all of the code in use, then you can begin to analyze it.
Some ideas:
read about the iterator and container libraries that you use for lists, and about how they implement begin() and next(). If there are no articles on this, you may have to have a look at the source code.
can the int loop index be swapped for a byte? (splitting hairs).
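For what it's worth, the standard library already provides this operation: std::next returns the advanced iterator rather than modifying its argument (which may be why it appeared not to work), and std::advance modifies it in place. For non-random-access iterators, which both std::list and std::forward_list have, either costs the same two increments as the hand-written loop. A minimal sketch:

#include <forward_list>
#include <iostream>
#include <iterator>
#include <list>

int main() {
    std::list<int> l = {10, 20, 30, 40};
    std::forward_list<int> fl = {10, 20, 30, 40};

    auto itL = std::next(l.begin(), 2); // returns an iterator pointing at 30
    auto itFl = fl.begin();
    std::advance(itFl, 2);              // same effect, modifies itFl in place

    std::cout << *itL << ' ' << *itFl << '\n'; // prints: 30 30
}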

boost multi_index_container and slow operator++

This is a follow-up to this MIC question. When adding items to the vector of reference wrappers, I spend about 80% of the time inside operator++, whatever iterating approach I choose.
The query works as follows:
VersionView getVersionData(int subdeliveryGroupId, int retargetingId,
                           const std::wstring &flightName) const {
    VersionView versions;
    for (auto i = 0; i < 3; ++i) {
        for (auto j = 0; j < 3; ++j) {
            versions.insert(m_data.get<mvKey>().equal_range(
                boost::make_tuple(subdeliveryGroupId + i, retargetingId + j, flightName)));
        }
    }
    return versions;
}
I've tried the following ways to fill the reference wrapper:
template <typename InputRange> void insert(const InputRange &rng) {
    // 1) base::insert(end(), rng.first, rng.second); // 12ms
    // 2) std::copy(rng.first, rng.second, std::back_inserter(*this)); // 6ms
    /* 3) size_t start = size(); // 12ms
       auto tmp = std::reference_wrapper<const VersionData>(VersionData(0, 0, L""));
       resize(start + boost::size(rng), tmp);
       auto beg = rng.first;
       for (; beg != rng.second; ++beg, ++start)
       {
           this->operator[](start) = std::reference_wrapper<const VersionData>(*beg);
       }
    */
    std::copy(rng.first, rng.second, std::back_inserter(*this));
}
Whatever I do, I pay for operator++ or for the size method, which just increments the iterator - meaning I'm still stuck in ++. So the question is whether there is a way to iterate over the result ranges faster. If there is no such way, is it worth going down into the implementation of equal_range and adding a new argument that holds a reference to the container of reference_wrappers to be filled with results, instead of creating a range?
EDIT 1: sample code
http://coliru.stacked-crooked.com/a/8b82857d302e4a06/
Due to this bug it will not compile on Coliru
EDIT 2: Call tree, with time spent in operator ++
EDIT 3: Some concrete stuff. First of all, I didn't start this thread just because operator++ takes too much of the overall execution time and I don't like that just "because"; at this very moment it is the major bottleneck in our performance tests. Each request is usually processed in hundreds of microseconds; requests similar to this one (they are somewhat more complex) are processed in ~1000-1500 microseconds, which is still acceptable. The original problem was that once the number of items in the data structure grows to hundreds of thousands, performance deteriorates to something like 20 milliseconds. Now, after switching to MIC (which drastically improved code readability, maintainability and overall elegance), I can reach something like 13 milliseconds per request, of which 80%-90% is spent in operator++. Now the question is whether this could be improved somehow, or should I look for some tar and feathers for myself? :)
The fact that 80% of getVersionData's execution time is spent in operator++ is not indicative of any performance problem per se -- at most, it tells you that equal_range and std::reference_wrapper insertion are comparatively faster. Put another way, when you profile some piece of code you will typically find locations where the most time is spent, but whether this is a problem or not depends on the required overall performance.
#kreuzerkrieg, your sample code does not exercise any kind of insertion into a vector of std::reference_wrappers! Instead, you're projecting the result of equal_range into a boost::any_range, which is expected to be fairly slow at iteration -- basically, increment ops resolve to virtual calls.
So, unless I'm seriously missing something here, the sample code performance or lack thereof does not have anything to do with whatever your problem is in real code (assuming VersionView, of which you don't show the code, is not using boost::any_range).
That said, if you can afford to replace your ordered indices with equivalent hashed indices, iteration will probably be faster, but this is an utter shot in the dark given that you're not showing the real stuff.
I think that you're measuring the wrong things entirely. When I scale up from 3x3x11111 to 10x10x111111 (so 111x as many items in the index), it still runs in 290ms.
And populating the stuff takes orders of magnitude more time. Even deallocating the container appears to take more time.
What Doesn't Matter?
I've contributed a version with some trade-offs, which mainly shows that there's no sense in tweaking things: View On Coliru
there's a switch to avoid the any_range (it doesn't make sense using that if you care for performance)
there's a switch to tweak the flyweight:
#define USE_FLYWEIGHT 0 // 0: none 1: full 2: no tracking 3: no tracking no locking
again, it merely shows you could easily do without it, and should consider doing so unless you need the memory optimization for the string (?). If so, consider using the OPTIMIZE_ATOMS approach:
OPTIMIZE_ATOMS basically applies the flyweight pattern to the wstring there. Since all the strings are repeated here, it will be mighty storage-efficient (although the implementation is quick and dirty and should be improved). The idea is applied much better here: How to improve performance of boost interval_map lookups
Here are some rudimentary timings:
As you can see, basically nothing actually matters for query/iteration performance
Any Iterators: Do They Matter?
It might be the culprit with your compiler. With mine (GCC 4.8.2) it wasn't anything big, but see the disassembly of the accumulate loop without the any iterator:
As you can see from the sections I've highlighted, there doesn't seem to be much fat in the algorithm, the lambda, or the iterator traversal. Now with the any_iterator the situation is much less clear, and if your compiler optimizes less well, I can imagine it failing to inline elementary operations, making iteration slow. (Just guessing a little now.)
OK, so the solution I applied is as follows:
In addition to the ordered_non_unique index (the 'byKey'), I added a random_access index. When the data is loaded, I rearrange the random index with m_data.get.begin(). Then, when the MIC is queried for the data, I just do boost::equal_range on the random index with a custom predicate that emulates the same logic as the ordering of the 'byKey' index. That's it; it gave me a fast size() (O(1), as I understand) and fast traversal.
Now I'm ready for your rotten tomatoes :)
EDIT 1:
Of course, I've changed the any_range from the bidirectional traversal tag to the random access one.
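For reference, a minimal sketch of a container definition along those lines; the VersionData layout and index keys are assumptions, since the real struct isn't shown:

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/composite_key.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/random_access_index.hpp>
#include <string>

namespace bmi = boost::multi_index;

struct VersionData { // hypothetical layout
    int subdeliveryGroupId;
    int retargetingId;
    std::wstring flightName;
};

struct mvKey {};  // tag for the ordered index
struct byPos {};  // tag for the random_access index

using Data = bmi::multi_index_container<
    VersionData,
    bmi::indexed_by<
        bmi::ordered_non_unique<
            bmi::tag<mvKey>,
            bmi::composite_key<
                VersionData,
                bmi::member<VersionData, int, &VersionData::subdeliveryGroupId>,
                bmi::member<VersionData, int, &VersionData::retargetingId>,
                bmi::member<VersionData, std::wstring, &VersionData::flightName>>>,
        bmi::random_access<bmi::tag<byPos>>>>;

// data.get<byPos>() then yields random-access iterators: O(1) size() and
// cheap increments, at the cost of keeping that index arranged yourself.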

More efficient data structure

I'm developing a project and I need to do a lot of comparisons between objects and insertions in lists.
Basically I have an object of type Board and I do the following:
if(!(seenStates.contains(children[i])))
{
statesToExpand.addToListOrderly(children[i]);
seenStates.insertHead(children[i]);
}
where statesToExpand and seenStates are two lists that I defined this way:
typedef struct t_Node
{
    Board *board;
    int distanceToGoal;
    t_Node *next;
} m_Node;
typedef m_Node* m_List;
class ListOfStates {
...
Everything works fine, but I did some profiling and discovered that almost 99% of the time is spent operating on these lists, since I have to expand, compare, insert, etc. almost 20,000 states.
My question is: is there a more efficient data structure that I could use in order to reduce the execution time of that portion of code?
Update
So I tried using std::vector and it is a bit worse (15 seconds instead of 13 with my old list). Probably I'm doing something wrong... With some more profiling I discovered that approximately 13.5 seconds are spent searching for an element in the vector. This is the code I am using:
bool Game::vectorContains(Board &b)
{
    clock_t stop;
    clock_t start = clock();
    if(seenStates.size() == 0)
    {
        stop = clock();
        clock_counter += (stop - start);
        return false;
    }
    for(vector<m_Node>::iterator it = seenStates.begin(); it != seenStates.end(); it++)
    {
        if( /* condition */ )
        {
            stop = clock();
            clock_counter += (stop - start);
            return true;
        }
    }
    stop = clock();
    clock_counter += (stop - start);
    return false;
}
Can I do something better here or should I move on to another data structure (maybe an unordered_set as suggested below)?
One more update
I tried the exact same code in release mode and the whole algorithm executes in just 1.2 seconds.
I didn't know there could be such a big difference between Debug and Release. I know that Release does some optimization but this is some difference!
This part:
if(!(seenStates.contains(children[i])))
for a linked list is going to be very slow. While the algorithmic time is O(n), same as it would be for a std::vector<Node>, the memory that you're walking over is going to be all over the place... so you're going to incur lots of cache misses as your container gets larger. After a while, your time is just going to be dominated by those cache misses. So std::vector will likely perform much better.
That said, if you're doing a lot of find()-type operations, you should consider using a container that is set up to do find very quickly... maybe a std::unordered_set?
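A minimal sketch of that idea, assuming Board can expose some hashable canonical key (the key() method and the string state are hypothetical; the real class isn't shown):

#include <string>
#include <unordered_set>

struct Board {
    std::string state;                              // hypothetical encoding
    std::string key() const { return state; }       // canonical key for hashing
};

struct BoardHash {
    size_t operator()(const Board &b) const { return std::hash<std::string>()(b.key()); }
};
struct BoardEq {
    bool operator()(const Board &a, const Board &b) const { return a.key() == b.key(); }
};

std::unordered_set<Board, BoardHash, BoardEq> seenStates;
// find() is then O(1) on average instead of an O(n) scan:
// if (seenStates.find(children[i]) == seenStates.end()) { ... }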
Using a list ends up with O(n) time to search for elements. You could consider data structures with more efficient lookup, e.g. std::map, std::unordered_map, a sorted vector, or other tree structures. There are many data structures; which one is best depends on your algorithm's design.
Indeed, you don't want to use a linked list in your case: looking for a specific value (i.e. contains()) is very slow in a linked list, O(n).
Thus using an array list (for example std::vector) kept sorted, or a binary search tree, would be smarter; the complexity of contains() would then become O(log n) on average.
However, if you are worried about reallocating your array list very often, you can reserve a lot of space when you create it (for example, room for 20,000 elements).
Don't forget to consider using two different data structures for your two lists.
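A minimal sketch of the sorted-vector idea (using int stand-ins for Board so the comparison is obvious):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> seen;
    seen.reserve(20000); // pre-allocate to limit reallocations

    auto contains_or_insert = [&](int v) {
        auto pos = std::lower_bound(seen.begin(), seen.end(), v); // O(log n) search
        if (pos != seen.end() && *pos == v)
            return true;     // already present
        seen.insert(pos, v); // keeps the vector sorted (O(n) shift on insert)
        return false;
    };

    bool first = contains_or_insert(5);  // false: 5 was not seen before
    bool second = contains_or_insert(5); // true: now it is
    std::cout << first << ' ' << second << '\n'; // prints: 0 1
}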
If I understand it correctly, your data structure resembles a singly linked list. So, instead of using your own implementation, you can try to work with a
std::forward_list<Board*>
or probably better with a
std::forward_list<std::unique_ptr<Board>>
(the pre-standard SGI container was called slist; since C++11 the standard one is std::forward_list). If you also need a reference to the previous element, then use a standard std::list. Both will give you constant-time insertion, but only linear lookup (at least if you don't know where to search).
Alternatively, you can consider using a std::set<std::unique_ptr<Board>> (with a comparator that compares the boards rather than the pointer values), which will give you logarithmic insertion and lookup, but without further effort you lose the information about the successor.
EDIT: std::vector seems no good choice for your kind of requirements. As far as I understood, you need fast search and fast insertion, and with a vector at least one of them is O(n): an unsorted vector has O(n) search, and a sorted one has O(n) insertion. Use a std::set or std::map instead, where both are O(log n). [But note that using the latter doesn't mean you will directly get faster execution times, as that depends on the number of elements.]

Is this a valid way (good style) of doing recursion

Apart from this code being horribly inefficient, is the way I'm writing recursive functions here considered "good style"? For example, what I am doing: creating a wrapper and then passing it the int mid and a counter int count.
What this code does is get a value from the array and then see whether that, combined with blockIndex, is greater than or equal to mid. So, apart from being inefficient, would I get a job writing recursive functions like this?
int NumCriticalVotes::CountCriticalVotesWrapper(Vector<int> &blocks, int blockIndex)
{
    int indexValue = blocks.get(blockIndex);
    blocks.remove(blockIndex);
    int mid = 9;
    return CountCriticalVotes(blocks, indexValue, mid, 0);
}

int NumCriticalVotes::CountCriticalVotes(Vector<int> &blocks, int blockIndex, int mid, int counter)
{
    if (blocks.isEmpty())
    {
        return counter;
    }
    if (blockIndex + blocks.get(0) >= mid)
    {
        counter += 1;
    }
    Vector<int> rest = blocks;
    rest.remove(0);
    return CountCriticalVotes(rest, blockIndex, mid, counter);
}
This is valid to the extent that it'll work for sufficiently small collections.
It is, however, quite inefficient -- for each recursive call you're creating a copy of the entire uncounted part of the Vector. So, if you count a vector containing, say, 1000 items, you'll first create a Vector of 999 items, then another of 998 items, then another of 997, and so on, all the way down to 0 items.
This would be pretty wasteful by itself, but it gets even worse. You're then removing an item from your Vector -- but you're removing the first item. Assuming your Vector is something like std::vector, removing the last item takes constant time but removing the first item takes linear time -- i.e., to remove the first item, each item after it is shifted "forward" into the vacated spot.
This means that instead of taking constant space and linear time, your algorithm is quadratic in both space and time. Unless the collection involved is extremely small, it's going to be quite wasteful.
Instead of creating an entire new Vector for each call, I'd just pass around offsets into the existing Vector. This will avoid both copying and removing items, so it's pretty trivial to make it linear in both time and space (which is still well short of optimum, but at least not nearly as bad as quadratic).
To reduce the space used still further, treat the array as two halves. Count each half separately, then add together the results. This will reduce the recursion depth to logarithmic instead of linear, which is generally quite substantial (e.g., for 1000 items, it's a depth of about 10 instead of about 1000. For a million items, the depth goes up to about 20 instead of a million).
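A minimal sketch of the offset-passing idea suggested above, keeping the question's Vector interface (get() is from the original code; size() is assumed to exist):

// Counts from position 'start' to the end without copying the Vector.
int NumCriticalVotes::CountCriticalVotes(const Vector<int> &blocks,
                                         int blockIndex, int mid, int start)
{
    if (start == blocks.size())
    {
        return 0;
    }
    int hit = (blockIndex + blocks.get(start) >= mid) ? 1 : 0;
    return hit + CountCriticalVotes(blocks, blockIndex, mid, start + 1);
}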
Without knowing exactly what you are trying to accomplish, this is a very tough question to answer. The way I see recursion, or coding in general, is: does it satisfy the following three requirements?
Does it accomplish all desired functionality?
Is it error resilient? Meaning it will not break when passed invalid inputs, or edge cases.
Does it accomplish its goal in sufficient time?
I think you are worried about number 3, and I can say that the time should fit the problem. For example, if you are searching through 2 huge lists, O(n^2) is probably not acceptable. However, if you are searching through 2 small sets, O(n^2) is probably sufficiently fast.
What I can say is to try timing different implementations of your algorithm on test cases. Just because your solution is recursive doesn't mean that it will always be faster than a "brute force" implementation. (This of course depends specifically on the case).
To answer your question: as far as recursion goes, this sample looks fine. However, will you get a job writing code like this? I don't know; how well does it satisfy the other two coding requirements?
Very subjective question. The tail recursion is nice (in my book) but I'd balance that against creating a new vector on every call, which makes a linear algorithm quadratic. Independent of recursion, that's a big no-no, particularly as it is easily avoidable.
A few comments about what the code is intended to accomplish would also be helpful, although I suppose in context that would be less problematic.
The issue with your solution is that you're passing the count around. Stop passing the count and use the call stack to keep track of it. The other issue is that I'm not sure what your second condition is supposed to do.
int NumCriticalVotes::CountCriticalVotesWrapper(Vector<int> &blocks, int blockIndex)
{
    int indexValue = blocks.get(blockIndex);
    blocks.remove(blockIndex);
    int mid = 9;
    return CountCriticalVotes(blocks, indexValue, mid);
}

int NumCriticalVotes::CountCriticalVotes(Vector<int> &blocks, int blockIndex, int mid)
{
    if (blocks.isEmpty())
    {
        return 0;
    }
    bool isCritical = /* Not sure what the condition is */;
    blocks.remove(0); // consume one element per call so the recursion terminates
    if (isCritical)
    {
        return 1 + CountCriticalVotes(blocks, blockIndex, mid);
    }
    return CountCriticalVotes(blocks, blockIndex, mid);
}
In C++, traversing lists of arbitrary length using recursion is never a good practice. It's not just about performance. The standard doesn't mandate tail call optimization, so you have a risk of stack overflow if you can't guarantee that the list has some limited size.
Sure, with typical stack sizes the recursion depth has to reach several hundred thousand before you overflow, but it's hard to know when designing a program what kinds of inputs it must be able to handle in the future. The problem might come back to haunt you much later.
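For illustration, a minimal iterative rewrite of the same count (keeping the question's Vector interface; size() is assumed to exist), which uses constant stack space regardless of input length:

int NumCriticalVotes::CountCriticalVotes(const Vector<int> &blocks, int blockIndex, int mid)
{
    int counter = 0;
    for (int i = 0; i < blocks.size(); i++)
    {
        if (blockIndex + blocks.get(i) >= mid) // same test as the recursive version
        {
            counter += 1;
        }
    }
    return counter;
}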