I have an array called quint8 block[16] which I:
initialize to zero with quint8 block[16] = { 0 }
fill it with some data
then pass it to a method/function as an argument like this bool fill(const quint8 *data)
In this fill function I want to see if the array is filled completely or if it contains null elements that were not filled.
Is it correct to perform the check this way: if(!data[i])? I have seen similar Q&As on this forum, but all of them use \0 or NULL, and I've heard that this is not good style, which has confused me.
Integer types do not have a "null" concept in C++.
Two possible ideas are:
1) Use some specific integer value to mean "not set". If 0 is valid for your data, then obviously it cannot be used for this. Maybe some other value would work, such as the maximum for the type (which looks like it would be 0xff in this case). But if all possible values are valid for your data, then this idea won't work.
2) Use some other piece of data to track which values have been set. One way would be an array of bools, each corresponding to a data element. Another way would be a simple count of how many elements have been set to a value (obviously this only works if your data is filled sequentially without any gaps).
If your question is whether you can distinguish between an element of an array that has been assigned the value zero, versus an element that has not been assigned to but was initialized with the value zero, then the answer is: you cannot.
You will have to find some other way to accomplish what you are trying to accomplish. It's hard to offer specific suggestions because I can't see the broader picture of what you're trying to do.
It depends on whether the type quint8 has a bool conversion operator or some other conversion operator, for example to an integer type.
If quint8 is some integral type (some typedef for an integral type) then there is no problem. This definition
quint8 block[16] = { 0 };
initializes all elements of the array by zero.
Take into account that in the general case the task can be done with the algorithms std::find, std::find_if, or std::any_of, declared in the header <algorithm>.
Related
I am learning C++. I see that you usually use integers to store a value, and if we want to represent that there is no value, then we use -1. For instance, the return from searching for the index of a string in a vector that doesn't contain the string would be -1. In JavaScript it's easy: you just declare it false.
I often run into two problems:
If I intend only to use positive values, I am wasting all the available range that is allocated for the int. All the numbers from -2 to -32768
If I intend to use negative values, this approach is useless.
I know that JavaScript is a different world, but, for instance, the range of such a datatype could be from -1 to 65534 in C++. So why doesn't C++ have a data type that can be a number or false? Or is there a common programming technique that I am overlooking?
In C++17 there is an optional type, which is called std::optional.
But I think you are missing one thing. C++ is on the one hand an extremely modern programming language, offering many of the ideas other modern languages offer. On the other hand, it was and still is designed with efficiency in mind, both efficiency of memory footprint and efficiency in speed.
For instance, the return from searching the index of a string in a vector that doesn't contain the string would be -1. In JavaScript it's easy: you just declare it false. I often run into two problems: if I intend only to use positive values, I am wasting all the available range that is allocated for the int (all the numbers from -2 to -32768).
That said:
std::find on a std::vector returns end(), not -1, when an element is not found.
std::basic_string::find does not return an int but a size_t, which is in fact unsigned. Yes, -1 is the literal used, but for an unsigned type -1 converts to the maximum representable value. You don't lose anything except exactly one value, the maximum. -1 is just the most convenient portable way to express the maximum of size_t.
In many implementations the maximum value of size_t is 18446744073709551615, and most C++ developers prefer not being able to search strings longer than 18446744073709551614 characters (which is far beyond realistic anyway) over fiddling with the problems of efficient optional types or spending extra bytes on flags.
Even where size_t maxes out at 65535, the probability that a size of 65534 would be insufficient but 65535 would suffice is extremely close to zero.
First of all, function return types in C++ are fixed at compile time, which is why you can't do something like this
// ...
if (found)
return index;
else
return false;
There are several ways to work around this. You have to choose according to your application. The first one to consider, is the one that is consistent with the STL library. Containers offer iterators to their elements, and find returns an iterator to the requested element. If it is not found, it returns an iterator to the past-the-end element of the container. For example, one can write
// on some container that offers the standard interface
auto it = container.find(value);
if (it == container.end()) {
// not found
}
The above is the cleanest solution. This is how std::vector and every other container in the STL signals a not found value. The major upside of writing code like this is that you can substitute the container for any other and the code would still work. Alternatively, if you design your own container with a compatible interface, you can plug it into existing code seamlessly.
In other situations, however, this might not work. For example, you might only need the actual index and it might be hard to get from an iterator on a non-contiguous container. In that case, you can use std::optional. It either holds an object or is empty and offers a bool conversion for easy checking.
std::optional<int> my_find(T value) { // T is a pseudocode placeholder
// ...
if (found) // pseudo-condition, depends on the rest of the code
return std::optional<int>(index); // explicit, could be more compact
else
return std::optional<int>(); // a default-constructed optional is empty
}
// elsewhere
auto i = my_find(value);
if (!i) {
// not found
}
Note that the above will add space overhead to track whether the object exists. If that is unacceptable for any reason, you can take the sentinel-value idea and create a compact optional object where the sentinel value is used internally to signal the non-existence of a value, and you offer an interface to check for that and maybe an exception to throw if the user requests the value while the object is empty. Something like
template <typename T, T sentinel_value>
class CompactOptional {
private:
T value;
public:
CompactOptional(T value = sentinel_value): value(value) {}
explicit operator bool () const { return value != sentinel_value; }
// getter and setter according to your needs
};
You have to decide what happens when you try to get a non-existent value, and then that is it. The sentinel value is whatever you can afford to not use, possibly the maximum value of your integral type if it is unsigned.
I would like to use -1 to indicate a size that has not yet been computed:
std::vector<std::size_t> sizes(nResults, -1);
and I was wondering why isn't there a more expressive way:
std::vector<std::size_t> sizes(nResults, std::vector<std::size_t>::npos);
It basically comes down to a fairly simple fact: std::string includes searching capability, and that leads to a requirement for telling the caller that a search failed. std::string::npos fulfills that requirement.
std::vector doesn't have any searching capability of its own, so it has no need for telling a caller that a search has failed. Therefore, it has no need for an equivalent of std::string::npos.
The standard algorithms do include searching, so they do need to be able to tell a caller that a search has failed. They work with iterators, not directly with collections, so they use a special iterator (one that should never be dereferenced) for this purpose. As it happens, std::vector::end() returns an iterator suitable for the purpose, so that's used--but this is more or less incidental. It would be done without (for example) any direct involvement by std::vector at all.
From cppreference:
std::size_t is the unsigned integer type of the result of the sizeof operator as well as the alignof operator (since C++11)... std::size_t can store the maximum size of a theoretically possible object of any type...
size_t is unsigned, and can't represent -1. In reality if you were to attempt to set your sizes to -1, you would actually be setting them to the maximum value representable by a size_t.
Therefore you should not use size_t to represent values which include the possible size of a type in addition to a value indicating that no size has been computed, because any value outside the set of possible sizes can not be represented by a size_t.
You should use a different type which is capable of expressing all of the possible values you wish to represent. Here is one possibility:
struct possibly_computed_size_type
{
size_t size;
bool is_computed;
};
Of course, you'll probably want a more expressive solution than this, but the point is that at least possibly_computed_size_type is capable of storing all of the possible values we wish to express.
One possibility is to use an optional type. An optional type can represent the range of values of a type, and an additional value meaning 'the object has no value'. The boost library provides such a type.
The standard library also provides an optional type, originally as an experimental feature (std::experimental::optional) and, since C++17, as std::optional. Here is an example I created using this type:
http://ideone.com/4J0yfe
I have this very large array, called grid. When I declare the array as below, every value in the array should be set to 0 according to the array constructor for integers
int testGrid[226][118];
However when I iterate through the entire array, I seem to get 0s for the majority of the array, however towards the lower part of the array I get arbitrary trash. The solution it is seems is to iterate over the array and manually set each value to 0. Is there a better way to do this?
You could do:
int testGrid[226][118] = {};
which will initialize your entries to 0.
Please see this C answer, which may come in handy for C++ too.
By the way, since this is C++, consider using an std::array, or an std::vector.
I want to know up to what index my array is filled. I know one method: maintain a temporary variable inside the loop and keep updating it, which will at the end give the size.
I want to know whether, besides this method, there is any other way to do this task. Preferably O(1) if possible, or anything better than O(n).
There is no generic way to do that as all elements of an array always contain a value.
Couple common ways to handle that:
keep track of "valid" elements yourself as you suggested in the post.
have a sentinel element that marks a "missing" value and check each element for it - the first element with that value marks the "end of the filled array". For reference types you can use null; for other types there is sometimes a specific value that is rarely used and can be treated as "missing", e.g. the maximum value of the integer type.
The second approach is how C-style strings are implemented - an array of characters terminated by a 0 character - so you can always compute the length of the string even if it is stored in a longer array of chars.
will this do?
size_t size_of_array = sizeof(array)/sizeof(array[0]);
Note that this computes the total number of elements the array can hold (its capacity), not how many have actually been filled, and it only works on a real array, not on a pointer the array has decayed to.
I'm using a type Id which is defined in another part of the code I'm using:
typedef int Id;
Now I am provided many objects, each of which comes with such an Id, and I would like to use the Id as index into a std::vector that stores these objects. It may look something like this:
std::vector<SomeObj*> vec(size);
std::pair<Id, SomeObj*> p = GetNext();
vec[p.first] = p.second;
The problem is that std::vector uses its own type to index its elements: std::vector::size_type (why isn't that templated?).
So strictly, it would be better to use std::map<Id, SomeObj*>, but that would be less efficient, and an array is what I really need here (I know that the indices of all the objects are contiguous and start with 0). Another problem is that the typedef int Id might change to typedef long int Id or similar in the future (it is part of my own code, so I control it, but ideally I should be allowed to change the typedef at some point; that's what a typedef is for).
How would you deal with this? Maybe use unordered_map<Id, SomeObj*>, where the hash function directly uses the Id as hash key? Would that be less memory-efficient? (I don't completely understand how unordered_map allocates its space given that the range of the hash function is unknown in advance?)
You can pass any integral type as an index to std::vector. If it doesn't match std::vector<T>::size_type (which is typically std::size_t), the value will be implicitly converted.
why isn't that templated?
Because standard containers are implemented to use the largest unsigned type that they reasonably can. If size_type is unsigned int in your implementation then that's for a reason, and whatever the reason is that prevented the implementers using a larger type, would still exist if you asked for something else[*]. Also, for your particular example, size types have to be unsigned, and you want to use a signed type, so that's another change that would be needed to support what you want to do.
In practice, size_type for a standard container is (almost?) always size_t. So if you asked for a larger vector, you couldn't have one, because a vector is backed by contiguous storage. The vector would be unable to allocate an array bigger than the maximum value of size_t bytes.
To use your Id as a vector index, you can either rely on implicit conversions, or you can explicitly convert (and, perhaps, explicitly bounds check) in order to make it absolutely clear what you are doing. You could also use asserts to ensure that Id is no bigger than size_type. Something like this, although a static assert would probably be better:
assert(std::numeric_limits<Id>::max() <= std::numeric_limits<std::vector<SomeObj*>::size_type>::max());
A map<Id, SomeObj*> would be a good option if the Id values in use are sparse. If the only valid Ids are 1 and 400,000,000, then a vector would be rather wasteful of memory.
If it makes you feel any more comfortable, remember that the literal 0 has type int, not vector<SomeObj*>::size_type. Most people have no qualms about writing vec[0]: indeed it's used in the standard.
[*] Even if that reason is just, "the implementers think that 4 billion elements is enough for anyone".
Write your own container wrapper that takes Id as the index type. Use map or unordered_map internally to implement the container. Program against this wrapper. If it turns out that this implementation is too slow, switch to vector internally and convert your Id index to vector::size_type (also internally, of course).
This is the cleanest approach. But really, vector::size_type will be some very large unsigned integer type so a conversion from Id to vector::size_type will always be safe (but not the other way round!).
The problem is, Id is an int (more accurately, a signed int) and can be negative. If Id were an unsigned type, e.g. typedef unsigned int Id;, there would be no problem.
If my understanding is correct so far, then I don't get why someone would want to use a negative number as an index into a vector (or array). What am I missing?