How do I define a 24-bit array in C++? (variable declaration)
There is no 24-bit variable type in C++.
You can use a bit-packed struct:
struct ThreeBytes {
    uint32_t value : 24;
};
But it is not guaranteed that sizeof(ThreeBytes) == 3.
You can also just use uint32_t or int32_t, depending on what you need.
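For instance, here is a minimal sketch of how such a bit-field behaves (it repeats the struct above so it stands alone; the final check is mine, purely for illustration):

#include <cstdint>

struct ThreeBytes {
    std::uint32_t value : 24;  // 24-bit field
};

int main() {
    ThreeBytes t{};
    t.value = 0x1234567;       // only the low 24 bits are kept: t.value == 0x234567
    // sizeof(ThreeBytes) is implementation-defined; typically 4, not 3
    return t.value == 0x234567 ? 0 : 1;
}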
Another choice is to use std::bitset:
typedef std::bitset<24> ThreeBytes;
Then make an array out of that:
ThreeBytes *myArray = new ThreeBytes[10];
Of course, if you really just need "three bytes", you can make an array of arrays:
typedef uint8_t ThreeBytes[3];
Note that uint8_t and friends are optional in the standard (they only exist on platforms with exactly 8-bit bytes); they are used here simply for clarity.
An unsigned byte array of 3 bytes is 24 bits. Depending on how you are planning to use it, it could do.
unsigned char arrayname[3];
As @GMan points out, you should be aware that not all systems have 8-bit chars.
If you intend to perform bitwise operations on them, then simply use an integral type that has at least 24 bits. An int is 32 bits on most platforms, so an int may be suitable for this purpose.
EDIT: Since you actually wanted an array of 24-bit variables, the most straightforward way to do this is to create an array of ints or longs (any integral type that holds at least 24 bits) and treat each element as though it were 24 bits.
Depending on your purpose (for example, if you are concerned that a 32-bit type might waste too much memory), you might also consider creating an array of bytes with three times the length.
I used to do that a lot for storing RGB images. To access the n'th element, you multiply by three and then add zero, one or two depending on which "channel" of the element you want. Of course, if you want to access all 24 bits as one integer, this approach requires some additional arithmetic.
So simply unsigned char myArray[ELEMENTS * 3]; where ELEMENTS is the number of 24-bit elements that you want.
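That "additional arithmetic" might look like the following sketch (put24/get24 are illustrative names of mine; the byte order within each element is an arbitrary choice):

#include <cstddef>
#include <cstdint>

// store the low 24 bits of v as element n (low byte first)
void put24(unsigned char* a, std::size_t n, std::uint32_t v) {
    a[n * 3 + 0] = (v >> 0)  & 0xFF;
    a[n * 3 + 1] = (v >> 8)  & 0xFF;
    a[n * 3 + 2] = (v >> 16) & 0xFF;
}

// reassemble element n as one integer
std::uint32_t get24(const unsigned char* a, std::size_t n) {
    return  std::uint32_t(a[n * 3 + 0])
         | (std::uint32_t(a[n * 3 + 1]) << 8)
         | (std::uint32_t(a[n * 3 + 2]) << 16);
}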
Use bitset or bitvector if they are supported on your platform. (They are :) )
std::vector<std::bitset<24> > myArray;
I have a question really similar to this:
Building a 32-bit float out of its 4 composite bytes.
Specifically, I have an array of unsigned char composed of 8 elements:
unsigned char c[8] = {0b01001000, 0b11100001, 0b00100110, 0b01000001, 0b01111011, 0b00010100, 0b10000110, 0b01000000};
Under a little-endian convention, this corresponds to two floats, namely { 10.4300f, 4.19000f }.
I know that I could obtain the latter with:
float f[2];
memcpy(&f, &c, sizeof(f));
//f = { 10.4300f, 4.19000f }
But this involves a copy.
Is there a way to cast the c array in place, changing its type, so that I can avoid copying?
"Is there a way to cast the c array in place ..."
No. However, if the array is sufficiently aligned to hold a float, what you can do after memcpy is to placement-new a copy of that float onto the array.
Optimisers are smart, and typically know that you copied the same value back. Sometimes two copies for the abstract machine result in zero copies for the CPU.
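A sketch of that trick (reuse_as_float is an illustrative name of mine; it assumes the buffer really is aligned for float and already holds a float's object representation):

#include <cstring>
#include <new>

float* reuse_as_float(unsigned char* bytes) {
    float tmp;
    std::memcpy(&tmp, bytes, sizeof tmp);  // the one copy the abstract machine requires
    return new (bytes) float(tmp);         // recreate a float object in the same storage
}

Note that the placement-new ends the lifetime of the char objects in that storage; from then on, access it through the returned float*.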
"Under a little-endian convention, this corresponds ..."
"I know that I could obtain the latter with ..."
Note that memcpy always produces native byte order, so you only get a little-endian result on little-endian systems. Interpreting the data as little-endian is therefore not a portable assumption.
If you want to avoid assuming native endianness, you'll need to read the bytes in the correct order, shift/mask them into an unsigned integer, then memcpy (or bit_cast) that integer into a float.
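A minimal sketch of that approach (read_le_float is my name; it assumes float is a 32-bit IEEE-754 type, which the question's data already presumes):

#include <cstdint>
#include <cstring>

// build a float from 4 bytes stored little-endian in c, independent of host byte order
float read_le_float(const unsigned char* c) {
    std::uint32_t u = std::uint32_t(c[0])
                    | std::uint32_t(c[1]) << 8
                    | std::uint32_t(c[2]) << 16
                    | std::uint32_t(c[3]) << 24;
    float f;
    std::memcpy(&f, &u, sizeof f);  // or std::bit_cast<float>(u) in C++20
    return f;
}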
I'm making a LZW compressor that records its output in hexadecimal. It currently uses an uchar (OpenCV) for storing values, and outputs the uchar in hexadecimal.
However, I have been asked to allow the user to choose how many bytes are used when storing each value, so he could have, for example, 2 bytes for each value (or 32 bytes, it's up to him).
So, to manipulate the output, I was thinking of using an array of uchars (if the user asks for 32 bytes, I use an array of 32 uchars). The question is: is there an easy way to write a big value into this array and output it later, without having to worry about which part goes in which index? That is, can I treat the array as just one x-byte value? Should I use a vector?
Any help is appreciated.
You could use the following union:
#include <stdint.h>

union pun_unsigned {
    unsigned char c[sizeof(uint64_t)];
    uint16_t u16;
    uint32_t u32;
    uint64_t u64;
};
Note that only reads and writes through the (signed or unsigned) char member are defined behaviour; in C++, reading any member other than the one last written is otherwise undefined.
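A quick sketch of using it in the well-defined direction (writing an integer member, then inspecting the bytes through the char member; the value is arbitrary):

#include <cstddef>
#include <cstdint>
#include <cstdio>

union pun_unsigned {
    unsigned char c[sizeof(std::uint64_t)];
    std::uint16_t u16;
    std::uint32_t u32;
    std::uint64_t u64;
};

int main() {
    pun_unsigned p{};
    p.u32 = 0xCAFEBABE;
    for (std::size_t i = 0; i < sizeof p.u32; ++i)
        std::printf("%02X ", p.c[i]);  // byte order is whatever the host uses
}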
I want to have a data variable which will be an integer, and its range will be from 0 to 1,000,000.
For example, normal int variables can store numbers from -2,147,483,648 to 2,147,483,647.
I want the new data type to have a smaller range so it can have a SMALLER SIZE. If there is a way to do that, please let me know.
There isn't; you can't specify arbitrary ranges for variables like this in C++.
You need 20 bits to store 1,000,000 different values, so using a 32-bit integer is the best you can do without creating a custom data type (and even then you'd only save 1 byte by going to 24 bits, since you can't allocate less than 8 bits).
As for enforcing the range of values, you could do that with a custom class, but I assume your goal isn't the validation but the size reduction.
So, there's no true good answer to this problem. Here are a few thoughts though:
If you're talking about an array of these 20 bit values, then perhaps the answers at this question will be helpful: Bit packing of array of integers
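The core idea there, as a rough bit-by-bit sketch (put20/get20 are my names; the buffer must hold at least (count * 20 + 7) / 8 bytes, and real implementations like the linked answers work a word at a time instead):

#include <cstddef>
#include <cstdint>
#include <vector>

void put20(std::vector<std::uint8_t>& buf, std::size_t i, std::uint32_t v) {
    for (std::size_t n = 0, bit = i * 20; n < 20; ++n, ++bit) {
        std::uint8_t mask = std::uint8_t(1u << (bit % 8));
        if ((v >> n) & 1u) buf[bit / 8] |= mask;   // set bit
        else               buf[bit / 8] &= std::uint8_t(~mask);  // clear bit
    }
}

std::uint32_t get20(const std::vector<std::uint8_t>& buf, std::size_t i) {
    std::uint32_t v = 0;
    for (std::size_t n = 0, bit = i * 20; n < 20; ++n, ++bit)
        v |= std::uint32_t((buf[bit / 8] >> (bit % 8)) & 1u) << n;
    return v;
}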
On the other hand, perhaps we are talking about an object, that has 3 int20_ts in it, and you'd like it to take up less space than it would normally. In that case, we could use a bitfield.
struct object {
    int a : 20;
    int b : 20;
    int c : 20;
} __attribute__((__packed__));  // GCC/Clang extension, not standard C++

printf("sizeof object: %zu\n", sizeof(struct object));
This code will probably print 8, signifying that it is using 8 bytes of space, not the 12 that you would normally expect.
You can only have data types that are a multiple of 8 bits in size. This is because, otherwise, the data type wouldn't be addressable. Imagine a pointer to 5 bits of data; that can't exist.
I want to define an integer variable in C/C++ such that my integer can store 10 bytes of data, or maybe x bytes of data as defined by me in the program, for now.
I tried this code:
int *ptr;
ptr = (int *)malloc(10);
Now if I take sizeof(ptr), it shows 4 and not 10. Why?
C and C++ compilers implement several sizes of integer (typically 1, 2, 4, and 8 bytes {8, 16, 32, and 64 bits}), but without some helper code to perform arithmetic operations you can't really make arbitrarily sized integers.
The declarations you wrote:
int *ptr;
ptr = (int *)malloc(10);
Made what is probably a broken array of integers. Broken because, unless you are on a system where (10 % sizeof(int)) == 0, you have extra bytes at the end which can't be used to store an entire integer.
There are several big-number class libraries you should be able to locate for C++ which do implement many of the operations you may want to perform on your 10-byte (80-bit) integers. With C you would have to do the operations as function calls, because it lacks operator overloading.
Your sizeof(ptr) evaluated to 4 because you are using a machine with 4-byte pointers (a 32-bit system). sizeof tells you nothing about the size of the data that a pointer points to. The only place where this gets tricky is when you use sizeof on an array's name, which is different from using it on a pointer. I mention this because array names and pointers share so many similarities.
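For example, a quick illustration of that difference:

#include <cstdio>

int main() {
    int arr[10];
    int* p = arr;
    // sizeof arr is the whole array; sizeof p is just the pointer:
    // prints "40 4" on a typical 32-bit system, "40 8" on a 64-bit one
    std::printf("%zu %zu\n", sizeof arr, sizeof p);
}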
Because on your machine, the size of a pointer is 4 bytes. Note that the type of the variable ptr is int *. You cannot get the complete allocated size with the sizeof operator if you malloc or new the memory, because sizeof is evaluated at compile time.
It is showing 4 bytes because a pointer on your platform is 4 bytes. The block of memory the pointer addresses may be of any arbitrary size; in your case, it is 10 bytes. You need to create a data structure if you need to track that:
struct VariableInteger
{
    int *ptr;
    size_t size;
};
Also, using an int type for your ptr variable doesn't mean the language will allow you to do arithmetic operations on anything of a size different from the size of int on your platform.
Because the size of the pointer is 4. Try something like:
typedef struct
{
    int a[10];
} big_int_t;

big_int_t x;
printf("%zu\n", sizeof(x));
Note also that an int is typically not 1 byte in size, so this will probably print 20 or 40, depending on your platform.
Integers in C++ are of a fixed size. Do you mean an array of integers? As for sizeof, the way you are using it, it tells you that your pointer is four bytes in size. It doesn't tell you the size of a dynamically allocated block.
Few or no compilers support 10-byte integer arithmetic. If you want to use integers bigger than the values specified in <limits.h>, you'll need to either find a library with support for big integers or make your own class which defines the mathematical operators.
I believe what you're looking for is known as "Arbitrary-precision arithmetic". It allows you to have numbers of any size and any number of decimals. Instead of using fixed-size assembly level math functions, these libraries are coded to do math how one would do them on paper.
Here's a link to a list of arbitrary-precision arithmetic libraries in a few different languages, compliments of Wikipedia: link.
So, you know how a primitive of type char has a size of 1 byte? How would I make a primitive with a custom size? So, instead of an int with a size of 4 bytes, I'd make one with a size of, let's say, 16.
Is there a way to do this? Is there a way around it?
It depends on why you are doing this. Usually, you can't use types of less than 8 bits, because that is the addressable unit for the architecture. You can use structs, however, to define different lengths:
struct s {
    unsigned int a : 4;  // a is 4 bits
    unsigned int b : 4;  // b is 4 bits
    unsigned int c : 16; // c is 16 bits
};
However, there is no guarantee that the struct will be 24 bits long. Also, this can cause endianness issues. Where you can, it's best to use system-independent types, such as uint16_t, etc. You can also use bitwise operators and bit shifts to twiddle things very specifically, as sketched below.
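Here is a minimal sketch of that manual shift-and-mask approach, mirroring the struct above (the field layout within the word is my choice):

#include <cstdint>

int main() {
    // pack by hand: a in bits 0-3, b in bits 4-7, c in bits 8-23
    std::uint32_t packed = 0;
    packed |= (0x5u    & 0xFu);
    packed |= (0xAu    & 0xFu)    << 4;
    packed |= (0xBEEFu & 0xFFFFu) << 8;

    // unpack the same fields
    unsigned a = packed & 0xFu;
    unsigned b = (packed >> 4) & 0xFu;
    unsigned c = (packed >> 8) & 0xFFFFu;
    return (a == 0x5u && b == 0xAu && c == 0xBEEFu) ? 0 : 1;
}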
Normally you'd just make a struct that represents the data in which you're interested. If it's 16 bytes of data, either it's an aggregate of a number of smaller types or you're working on a processor that has a native 16-byte integral type.
If you're trying to represent extremely large numbers, you may need to find a special library that handles arbitrarily-sized numbers.
In C++11, there is an excellent solution for this: std::aligned_storage.
#include <iostream>
#include <new>
#include <type_traits>

int main()
{
    // uninitialized storage with the size and alignment of an int
    typedef std::aligned_storage<sizeof(int), alignof(int)>::type memory_type;
    memory_type storage;

    int *p = new (&storage) int(5);  // construct an int in that storage
    std::cout << *p << std::endl;
    return 0;
}
It allows you to declare a block of uninitialized storage on the stack.
If you want to make a new type, typedef it. If you want it to be 16 bytes in size, typedef a struct that has 16 bytes of member data within it. Just beware that quite often compilers will pad things on you to match your system's alignment needs. A 1-byte struct rarely remains 1 byte without care.
You could also just keep the bytes in a std::string and convert to and from it. I don't know enough C++ to give an example, but I think this would be pretty intuitive.