If there are Vectors, then why are there Arrays? [closed] - c++

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I have always asked myself this question. I tried to find the answer on the internet, but I just couldn't find what I was really looking for. If the developers made vectors, which are easier to use (according to some people), then what is the use of arrays (which are generally avoided, according to some people as well)?

The elements stored in a std::array can be allocated on the stack, as the size is known at compile time, whereas the elements of a std::vector will be allocated on the heap. This can make a huge performance difference. More generally, a std::array does not need its own memory allocation, but a std::vector always does.
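A minimal sketch of that difference (just the two standard containers, nothing else assumed):

    #include <array>
    #include <vector>

    int main() {
        // Size fixed at compile time: the ten ints live inside the
        // std::array object itself, so no separate allocation is needed.
        std::array<int, 10> a{};

        // std::vector manages a separately allocated buffer (typically on
        // the heap), because its size can change at run time.
        std::vector<int> v(10);

        a[0] = 1;
        v[0] = 1;
    }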

In C++, array is used to refer to two distinct kinds of things. One is std::array. The other is the built-in array type you get from a declaration like this: int foo[10];. This defines an array of 10 integers, named foo.
The advice against using an array will (at least usually) refer to the built-in array types. I don't know of anybody who advises against using std::array (except for cases where somebody needs a different container such as std::vector instead).
It's pretty easy to advise using std::array over a built-in array type, simply because std::array is designed to impose no overhead compared to a built-in array. In addition, however, std::array provides the normal container interface for getting things like the first element of the array, the size of the array, or iterators to the beginning and end, so it's easy to apply a standard algorithm to a std::array.
Of course, all of these can be done with built-in array types as well. The implementation of std::array doesn't contain any "magic"--it just provides a standard interface to things you could do on your own. At the same time, it does provide a standard interface, and normally imposes no overhead, so there's rarely a reason to do the job on your own.
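For instance, a sketch of that container interface in use (nothing here is specific to any particular program):

    #include <algorithm>
    #include <array>
    #include <iostream>

    int main() {
        std::array<int, 5> a{3, 1, 4, 1, 5};

        // The usual container interface: size(), front(), iterators...
        std::cout << "size: " << a.size() << ", first: " << a.front() << '\n';

        // ...which means standard algorithms apply directly.
        std::sort(a.begin(), a.end());
        for (int x : a) std::cout << x << ' ';
        std::cout << '\n';
    }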

Related

Use of studying different algorithm mechanism for sorting when STL does it crisply [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I mean, what is the use of studying different sorting algorithms when sorting can be done with a single line in C++ using the STL?
Is it just for the sake of knowing (or any other reason)?
It's comparable (I think) to knowing all of the different STL containers. Think about all the different options you have just to store objects: priority queues, vectors, arrays, deques, stacks, maps, sets, etc. The list goes on. A naive programmer may simply use a std::vector for everything. I mean, everyone is always saying such good things about std::vector: it manages its own size, it's extremely fast at adding new elements, and so on. But do you use std::vector for all your containers? I certainly hope not! The same logic applies to knowing the various sorting algorithms: there are cases where the built-in sorting mechanisms are simply inadequate, and you must not only know how to recognize when this situation occurs but also be able to come up with a clean solution.
Just because the STL handles many operations (such as sorting) effectively does not mean it will handle ALL situations effectively.
Learning different ways to do things and the benefits/tradeoffs they provide is often helpful.
For (an extreme) example: if you are sorting a container of at most 5 elements, then the lowly bubble sort may outperform std::sort (which is typically an introsort built on quicksort). So if this is something you do millions of times each second, you'd lose out with std::sort.
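A rough sketch of that idea (the function name is made up for illustration, and whether it actually beats std::sort on a given machine has to be measured, not assumed):

    #include <array>
    #include <cstddef>
    #include <utility>

    // Simple quadratic sort for a tiny, fixed-size array: no pivot selection,
    // no dispatch, no recursion, which is why it can compete on 5 elements.
    void bubbleSortSmall(std::array<int, 5>& a) {
        for (std::size_t i = 0; i + 1 < a.size(); ++i)
            for (std::size_t j = 0; j + 1 < a.size() - i; ++j)
                if (a[j + 1] < a[j]) std::swap(a[j], a[j + 1]);
    }

    int main() {
        std::array<int, 5> a{5, 2, 4, 1, 3};
        bubbleSortSmall(a);   // same result as std::sort(a.begin(), a.end())
    }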
There is never (or at least "very rarely") a single "best" solution to a problem. So learning about the alternatives and the tradeoffs is valuable.

How do we make our C++ programs fix array index out of range by themselves? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I want to use some C++ code which is not written by me.
However, there are a lot of array-index-out-of-range accesses in the code.
Moreover, the lengths of the arrays may not be fixed; for example, they may be determined by the size of the input image.
I do not have enough budget to fix them manually, so I ask this question here.
I would like a[i] to behave as a[0] if i<0 and as a[a.Length-1] if i>=a.Length, while keeping the code written as a[i].
How can I do this?
You might want to try a wrapper class that is initialized with the original array and then uses an operator[] to behave as you need.
You could write a (templated) class that wraps an array and overloads the [] operator to perform bounds-checked access to the underlying array. You could then use this class instead of normal C arrays.
How workable this will be will depend heavily on how the application uses the array. If the array is a global variable or part of a structure/class and is only ever accessed by [], then it will work great; but if the array is passed around by "decaying" to a pointer (and note that array parameters are really pointer parameters), then more work will be needed, changing parameter types and possibly creating a separate "checked array reference" class to be used in addition to your "checked array".
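A sketch of such a wrapper (the ClampedArray name and interface are made up; it assumes the array is non-empty and clamps out-of-range indices rather than throwing):

    #include <cstddef>
    #include <vector>

    // Hypothetical wrapper: operator[] clamps out-of-range indices to the
    // first/last element instead of producing undefined behaviour.
    template <typename T>
    class ClampedArray {
    public:
        explicit ClampedArray(std::size_t size) : data_(size) {}  // assumes size > 0

        T& operator[](long long i) {
            if (i < 0) i = 0;
            const long long last = static_cast<long long>(data_.size()) - 1;
            if (i > last) i = last;
            return data_[static_cast<std::size_t>(i)];
        }

        std::size_t size() const { return data_.size(); }

    private:
        std::vector<T> data_;
    };

    int main() {
        ClampedArray<int> a(10);
        a[-3] = 1;   // writes to a[0]
        a[42] = 2;   // writes to a[9]
    }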

Why does the C++ STL have five different iterators? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
Why does the C++ STL have five iterator categories? A random access iterator alone could be sufficient to operate on all the containers. Is there any specific reason?
Sorry, my mistake. I didn't mean the random access iterator; I meant to ask about the bidirectional iterator. Doesn't a bidirectional iterator alone cover the functionality of input, output, and forward iterators? Is there any specific reason to introduce the (input, output, forward) iterator concepts? Thanks.
Containers aren't the only interesting sequences. Also, std::list<...> and the associative containers don't have an efficient method for random access, although they are containers. std::forward_list<...> can walk in just one direction. When sequences are sources or drains, they can often be traversed just once. Oh, look! I actually gave reasons for all five categories!
Note that the "STL iterators" are not classes but concepts, i.e., requirements for operations and associated types needed to meet the respective iterator concept. The basic idea is that algorithm interfaces are specified in terms of the weakest concepts yielding an efficient implementation. When stronger concepts are provided to the algorithms they may be able to apply some optimizations. This approach yields flexible and efficient algorithms operating on all kinds of different sequences.
To get an idea why, check this page.
A random access iterator cannot always work. A simple example: if you're streaming data via the network, you cannot start again from the beginning. There are more reasons, but simply read the page.
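A small sketch of why a single category can't cover everything: an input iterator over a stream is single-pass, while a vector's random access iterator supports constant-time jumps:

    #include <iostream>
    #include <iterator>
    #include <sstream>
    #include <vector>

    int main() {
        // Input iterators are single-pass: once the stream data is read, it
        // is consumed, so no bidirectional or random access iterator could
        // be provided here.
        std::istringstream in("1 2 3 4");
        std::istream_iterator<int> first(in), last;
        std::vector<int> v(first, last);   // one pass over the stream

        // Random access iterators allow constant-time jumps.
        auto it = v.begin();
        it += 2;                           // only valid for random access
        std::cout << *it << '\n';          // prints 3
    }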

Storing large integers in C++ [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I want to store and operate on very large integers, what is the best way to do this without using pre-built libraries?
Based on what another StackOverflow user stated:
The std::string object will be copied onto the stack, but the string body will not; it will be allocated on the heap. The actual limitation will depend on the system and program memory usage and can be something like ten million to one billion characters on a 32-bit system.
I just thought of two simple ways, both of which require me to write my own class. The first is to use vectors and strings, and the second is to break down a large integer into separate blocks in integer arrays and add up the total.
The max_size() of a string on my computer is 4294967291.
I have decided to write my own class.
Thanks for the help: C++ char vector addition
EDIT:
Working on it: https://github.com/Jyang772/Large_Number_Collider
It depends on how this integer will be used, but to keep the semantics of numbers and make your class easier to write, I'd suggest using a vector of long integers.
Using std::string will be far more complicated for code design and maintenance.
You will have to redefine every operator and take into account the propagation of computations (such as carries) from one chunk of your number to another.
The usual way is to use an array (vector, etc) of ints (longs, etc), rather than a string.
Start by looking at existing large integer classes, even if you can't use one verbatim for your homework.
When we faced similar problems in contests, we used vectors, each cell containing one digit of the number. This way you can store an immensely large number.
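A minimal sketch of that digit-vector approach (the BigInt name and interface are made up for illustration; it only handles non-negative numbers and addition):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Digits stored in base 10, least significant digit first.
    struct BigInt {
        std::vector<std::uint8_t> digits;

        static BigInt fromString(const std::string& s) {
            BigInt n;
            for (auto it = s.rbegin(); it != s.rend(); ++it)
                n.digits.push_back(static_cast<std::uint8_t>(*it - '0'));
            return n;
        }

        std::string toString() const {
            std::string s(digits.rbegin(), digits.rend());
            for (char& c : s) c += '0';
            return s;
        }
    };

    // Addition with carry propagation from one digit to the next.
    BigInt add(const BigInt& a, const BigInt& b) {
        BigInt result;
        int carry = 0;
        const std::size_t n = std::max(a.digits.size(), b.digits.size());
        for (std::size_t i = 0; i < n || carry; ++i) {
            int sum = carry;
            if (i < a.digits.size()) sum += a.digits[i];
            if (i < b.digits.size()) sum += b.digits[i];
            result.digits.push_back(static_cast<std::uint8_t>(sum % 10));
            carry = sum / 10;
        }
        return result;
    }

Storing one digit per cell matches the contest approach above; a production class would typically use larger chunks (for example, base 10^9 per cell) for speed, at the cost of slightly trickier carry handling.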

What are allocators and when is their use necessary? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
While reading books on C++ and the standard library, I see frequent references to allocators.
For example, Nicolai Josuttis's The C++ Standard Library discusses them in detail in the last chapter, and both items 10 ("be aware of allocators' conventions & restrictions") and 11 ("understand the legitimate uses of custom allocators") in Scott Meyers's Effective STL are about their use.
My question is, how do allocators represent a special memory model? Is the default STL memory management not enough? When should allocators be used instead?
If possible, please explain with a simple memory model example.
An allocator abstracts allocating raw memory, and constructing/destroying objects in that memory.
In most cases, the default Allocator is perfectly fine. In some cases, however, you can increase efficiency by replacing it with something else. The classic example is when you need/want to allocate a large number of very small objects. Consider, for example, a vector of strings that might each be only a dozen bytes or so. The normal allocator uses operator new, which might impose pretty high overhead for such small objects. Creating a custom allocator that allocates a larger chunk of memory, then sub-divides it as needed can save quite a bit of both memory and time.
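A minimal sketch of an allocator that meets the standard allocator requirements (the SimpleAllocator name is made up; here it just forwards to malloc/free, but the allocate/deallocate bodies are where a pooling or chunking strategy would go):

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    template <typename T>
    struct SimpleAllocator {
        using value_type = T;

        SimpleAllocator() = default;
        template <typename U>
        SimpleAllocator(const SimpleAllocator<U>&) noexcept {}  // needed for rebinding

        T* allocate(std::size_t n) {
            if (void* p = std::malloc(n * sizeof(T)))
                return static_cast<T*>(p);
            throw std::bad_alloc();
        }

        void deallocate(T* p, std::size_t) noexcept { std::free(p); }
    };

    template <typename T, typename U>
    bool operator==(const SimpleAllocator<T>&, const SimpleAllocator<U>&) noexcept { return true; }
    template <typename T, typename U>
    bool operator!=(const SimpleAllocator<T>&, const SimpleAllocator<U>&) noexcept { return false; }

    int main() {
        // The container obtains all of its storage through the custom allocator.
        std::vector<int, SimpleAllocator<int>> v;
        for (int i = 0; i < 100; ++i) v.push_back(i);
    }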