I'm trying to store bytes of data into an enum:
enum bufferHeaders
{ // output_input
void_void = 0x0003010000,
output_void = 0x00030B8000,
void_input = 0x0006010000,
led_on = 0xff1A01,
led_off = 0x001A01,
};
The leading zeros of my values are being ignored, and the compiler stores my values as int.
Later I need to be able to find exactly how many bytes each set of data is.
If you write
enum bufferHeaders : int
{
// ToDo - your values here
};
Then it's guaranteed that the backing type is int, so the number of bytes used to store each value is sizeof(int). But then the onus is on you to make sure that the enum values can fit into an int: 0xff1A01, for example, might not on a platform where int is only 16 bits wide.
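To make that concrete, here is a minimal sketch (reusing the values from the question) showing that every enumerator has the same size regardless of how many leading zeros the literal was written with:

#include <iostream>

// Sketch only: with ": int" the underlying type is fixed, so every enumerator
// occupies sizeof(int) bytes, and the leading zeros of a literal are simply
// not stored anywhere.
enum bufferHeaders : int
{
    void_void   = 0x0003010000,
    output_void = 0x00030B8000,
    void_input  = 0x0006010000,
    led_on      = 0xff1A01,
    led_off     = 0x001A01,
};

int main()
{
    std::cout << sizeof(bufferHeaders) << '\n'; // sizeof(int), typically 4
    std::cout << sizeof(led_off) << '\n';       // also 4, not 3
}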
Numerically speaking, preserving leading zeros makes no sense at all.
You are mixing up numerical value and representation!
There are always multiple ways to represent one and the same value: 77, 0x4d, 0x04d (guess: it is my birth year). I won't get older just because you change the representation.
Internally, the CPU has its own representation for a value, which is a specific bit pattern in one of its registers (or somewhere in RAM).
When printing out integer or enum values to console, the internal representation of the CPU is converted to some default representation suitable for most situations. If you want to have another representation, you need to explicitly tell which formatting to apply to retrieve such a representation:
#include <iostream>
#include <iomanip>

std::cout << std::hex << std::showbase << std::internal << std::setfill('0')
          << std::setw(10) << 77 << std::endl;
Be aware that the stream modifiers above, all except std::setw, modify the stream persistently (until the next modification), so you need to explicitly revert them if you want a different output format afterwards. std::setw, in contrast, only applies to the next value output and has to be re-applied for every subsequent value.
Is there a specific need for you to use enums? By default every value will have the same type, and you seem to be focused on the numerical value stored. If you are really concerned about the representation, a good approach might be to leave the enum as a plain int and store the values in a std::map, where the enum value is the key and the mapped value is the data in whichever form you want to keep it.
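As a sketch of that idea (names are illustrative), keeping the enum as the key and storing the raw bytes in a container preserves both the exact byte values and their count:

#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

enum bufferHeaders { void_void, output_void, void_input, led_on, led_off };

int main()
{
    // The byte sequences from the question written out explicitly, so the
    // leading zero bytes and the exact length are both preserved.
    std::map<bufferHeaders, std::vector<std::uint8_t>> headers{
        { void_void,   { 0x00, 0x03, 0x01, 0x00, 0x00 } },
        { output_void, { 0x00, 0x03, 0x0B, 0x80, 0x00 } },
        { led_on,      { 0xFF, 0x1A, 0x01 } },
    };

    std::cout << "led_on is " << headers[led_on].size() << " bytes\n"; // 3
}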
I have learnt that enums are data types as opposed to data structures. I have also learnt that they are usually nominal in nature rather than ordinal. Consequently, it does not make sense to iterate through an enum, or to obtain an enum's constant value by index the way you would with an array, e.g. week[3].
However, I have come across instances where an index is used to obtain the value from an enum:
#include <iostream>
enum week {Mon=5, Tues, Wed};
int main()
{
week day = (week)0;
std::cout << day << "\n"; // outputs 0
day = (week)13;
std::cout << day << "\n"; // outputs 13
return 0;
}
How is this working?
I assumed (week)13 would not work, given there is nothing at this index. Is it that the cast is failing and falling back to the type to be cast?
As a side note: I believe some confusion with this style of code in C/C++ may occur due to other languages (C#) handling this case differently.
Edit:
The linked solutions don't answer the question I'm asking.
[1] is about comparing integers to enums -- in this case I'm asking why casting an int to an enum gives a certain result. [2] (Assigning an int value to enum and vice versa in C++) mentions "Old enums (unscoped enums) can be converted to integer but the other way is not possible (implicitly).", but by the accounts of my code, it does seem possible. Lastly, [3] has nothing to do with this question: I'm not talking about assignment of enums, rather the casting of them via an integer.
The rule, roughly, is that any value that fits in the bits needed to represent all of the enumerators can be held in an enumerated type. In the example in the question, Wed has the value 7, which requires three bits. So any value from 0 to 7 can be stored in an object of type week. Even if Wed weren't there, the remaining two values would need the same three low bits, so the values 0 to 7 would all still work. 13, on the other hand, requires a fourth bit, which isn't needed for any of the enumerators, so it is not required to work.
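A small sketch of that rule, using the enum from the question (the comments state the consequence of the rule, not a guaranteed runtime behaviour):

#include <iostream>

enum week { Mon = 5, Tues, Wed }; // largest enumerator is Wed == 7, i.e. 3 bits

int main()
{
    week inRange  = static_cast<week>(3);  // 0..7 fit in those 3 bits: fine
    week outRange = static_cast<week>(13); // needs a 4th bit: outside week's
                                           // range of values, so not
                                           // guaranteed to round-trip
    std::cout << inRange << ' ' << outRange << '\n';
}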
I am implementing C++ 11 based application and I am using TinyCbor C library for Encoding and Decoding application specific data as below:
#include "cbor.h"
#include <iostream>
using namespace std;
int main() {
struct MyTest {
uint8_t varA;
float varB;
};
MyTest obj;
obj.varA = 100; // If I set it to below 20 then it works
obj.varB = 10.10;
uint8_t buff[100];
//Encode
CborEncoder encoder;
CborEncoder array;
cbor_encoder_init(&encoder, buff, sizeof(buff), 0);
cbor_encoder_create_array(&encoder, &array, CborIndefiniteLength);
cbor_encode_simple_value(&array, obj.varA);
cbor_encode_float(&array, obj.varB);
cbor_encoder_close_container(&encoder, &array);
// Decode
CborParser parser;
CborValue value;
cbor_parser_init(buff, sizeof(buff), 0, &parser, &value);
CborValue array;
cbor_value_enter_container(&value, &array);
uint8_t val;
cbor_value_get_simple_type(&array, &val);
// This prints blank
cout << "uint8_t value: " << static_cast<int>(val) << endl;
float fval;
cbor_value_get_simple_type(&array, &fval);
cout << "float value: " << fval << endl;
return 0;
}
The above code works when I set the value of uint8_t varA to below 20 and I see the value printed on the console, but if I set it to more than 20 then it sometimes gives the error CborErrorIllegalSimpleType. Or, if the value is set to 21, it returns the type as CborBooleanType or CborNullType.
What is wrong with the code?
How do I encode and decode a uint8_t using TinyCbor?
You have a few things going on here.
cbor_encoder_create_array(&encoder, &array, CborIndefiniteLength);
Don't use indefinite length unless you plan on streaming the encoding. If you have all the data in front of you when encoding, use a definite length. See why here: Serialize fixed size Map to CBOR. Also, I'm not sure what version you are using, but indefinite-length objects were at least fairly recently a to-do item for TinyCBOR.
cbor_value_get_simple_type(&array, &val);
You don't need simple here. Simple types are primitives that are mostly undefined. CBOR's default integer type is int64_t, a signed 64-bit value. Simple does allow for a natural stop at 8 bits, but 20 in simple is Boolean false, 21 is true, 22 is null. You discovered this already. A simple value can't store a negative or a float, and while you could use it to represent an 8-bit value like 100, you really shouldn't. The good news about CBOR's types is that while the default is 64-bit, it uses as little memory to encode as needed. So storing 100 in CBOR takes two bytes, not eight: CBOR fits the values 0-23 into a single byte, but 24 and above costs one byte of overhead.
By overhead I mean: when you encode a CBOR integer, the first three bits of the initial byte give the major type (binary 000 means UNSIGNED INTEGER) and the remaining five bits are the additional information. Values 0-23 fit directly in those five bits, so you get it all in one byte; larger additional-information values instead say that the actual value follows in the next 1, 2, 4 or 8 bytes. See the chart here to understand it: https://en.m.wikipedia.org/wiki/CBOR So 24-255 requires 2 bytes encoded, and so on. A far cry from always using 8 bytes just because it can.
The short version here is that CBOR won't use more space than needed - BUT - it's not a strongly typed serializer! If you mean for 100 inside 8 bits to be stored in that array and someone stores 10000, which needs 16 bits, it'll look/work fine until you go to parse it and try to put the large number in an 8-bit spot. You need to cast or validate your data. I recommend parse, THEN validate.
cbor_value_get_simple_type(&array, &fval);
cout << "float value: " << fval << endl;
I'd have to look in TinyCBOR's code, but I think this working is a happy accident rather than something technically supported. Because simple types use the same three major bits, you are able to read the value back with get_simple. You should instead examine the type and make the correct call to get a full- or half-precision float.
TinyCBOR is pretty nice, but there are definitely a couple of gotchas hidden in it. It really helps to understand the CBOR encoding even though you are trusting the serializer to do the work for you.
Use CborError cbor_encode_uint(CborEncoder *encoder, uint64_t value) instead. This function will encode an unsigned integer value with the smallest representation.
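Putting those points together, a rough sketch of the corrected encode/decode path might look like this (assuming a recent TinyCBOR; error checking omitted for brevity):

#include "cbor.h"
#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t buff[100];

    // Encode: a definite-length array of two items.
    CborEncoder encoder, arrayEncoder;
    cbor_encoder_init(&encoder, buff, sizeof(buff), 0);
    cbor_encoder_create_array(&encoder, &arrayEncoder, 2);
    cbor_encode_uint(&arrayEncoder, 100);      // the uint8_t, as a CBOR unsigned int
    cbor_encode_float(&arrayEncoder, 10.10f);  // the float
    cbor_encoder_close_container(&encoder, &arrayEncoder);

    // Decode.
    CborParser parser;
    CborValue value, item;
    cbor_parser_init(buff, sizeof(buff), 0, &parser, &value);
    cbor_value_enter_container(&value, &item);

    uint64_t u = 0;
    cbor_value_get_uint64(&item, &u);          // parse first, then validate the range
    std::uint8_t varA = static_cast<std::uint8_t>(u);
    cbor_value_advance(&item);

    float varB = 0.0f;
    cbor_value_get_float(&item, &varB);
    cbor_value_advance(&item);
    cbor_value_leave_container(&value, &item);

    std::cout << "uint8_t value: " << static_cast<int>(varA) << "\n"
              << "float value: " << varB << "\n";
}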
I'm working on a microcontroller with only 2KB of SRAM and desperately need to conserve some memory. Trying to work out how I can put 8 0/1 values into a single byte using a bitfield but can't quite work it out.
#include <cstdint>
#include <iostream>
using namespace std;

struct Bits
{
int8_t b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1;
};
int main(){
Bits b;
b.b0 = 0;
b.b1 = 1;
cout << (int)b.b0; // outputs 0, correct
cout << (int)b.b1; // outputs -1, should be outputting 1
}
What gives?
All of your bitfield members are signed 1-bit integers. On a two's complement system, that means they can represent only either 0 or -1. Use uint8_t if you want 0 and 1:
struct Bits
{
uint8_t b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1;
};
As a word of caution: the standard doesn't really enforce an implementation scheme for bit-fields. There is no guarantee that Bits will be 1 byte; hypothetically it is entirely possible for it to be larger.
In practice, implementations usually follow the obvious logic and it will "almost always" be 1 byte in size, but again, there is no guarantee. If you want to be sure, you could do it manually, as in the sketch below.
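For example, a minimal sketch of the manual approach: one uint8_t plus shift/mask helpers, so the size is exactly one byte by construction (the names here are illustrative):

#include <cstdint>
#include <iostream>

struct Flags {
    std::uint8_t bits = 0;

    // Set or clear the bit at index i (0..7).
    void set(unsigned i, bool v) {
        if (v) bits |= static_cast<std::uint8_t>(1u << i);
        else   bits &= static_cast<std::uint8_t>(~(1u << i));
    }
    // Read the bit at index i.
    bool get(unsigned i) const { return (bits >> i) & 1u; }
};

int main() {
    Flags f;
    f.set(0, false);
    f.set(1, true);
    std::cout << f.get(0) << f.get(1) << '\n'; // prints 01
    std::cout << sizeof(Flags) << '\n';        // 1 byte
}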
BTW, -1 is still "true" in a boolean context, but -1 != true.
As noted, these variables consist of only a sign bit, so the only available values are 0 and -1.
A more appropriate type for these bitfields would be bool. C++14 §9.6/4:
If the value true or false is stored into a bit-field of type bool of any size (including a one bit bit-field), the original bool value and the value of the bit-field shall compare equal.
Yes, std::uint8_t will do the job, but you might as well use the best fit. You also won't need things like the cast in std::cout << (int)b.b0;.
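For instance, a short sketch with bool bit-fields (the earlier caveat about the struct's total size still applies):

#include <iostream>

struct Bits
{
    bool b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1;
};

int main()
{
    Bits b{};        // value-initialize all flags to false
    b.b0 = 0;
    b.b1 = 1;
    std::cout << b.b0; // outputs 0
    std::cout << b.b1; // outputs 1, no cast needed
}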
Signed and unsigned integers are the answer.
Keep in mind that signedness is just an interpretation of bits: the -1 or 1 is just the 'print' serializer interpreting the variable's type as the compiler revealed it to cout's overloads (see operator overloading). The bit itself is the same, and so is its value (on/off), since you have only one bit.
That aside, it is good practice to be explicit, so prefer to declare your variable as unsigned; that instructs the compiler to emit the proper code when you set the value or hand it to a serializer like "print" (cout).
"COUT" OPERATOR OVERLOADING:
cout works through a series of overloaded functions, and the parameter types determine which one the compiler calls. There are two relevant overloads, one receiving an unsigned value and another a signed one, so they can interpret the same data differently, and you can change which one is used by instructing the compiler with a cast. See cout << myclass.
For example:
int* x = new int;
int y = reinterpret_cast<int>(x);
y now holds the integer value of the memory address of variable x.
Variable y is of size int. Will that int size always be large enough to store the converted memory address of ANY TYPE being converted to int?
EDIT:
Or is safer to use long int to avoid a possible loss of data?
EDIT 2: Sorry people, to make this question more understandable: what I want to find out here is the size of the returned HEX value as a number -- not the size of int, nor the size of a pointer to int, but the plain hex value. I need to get that value in human-readable notation. That's why I'm using reinterpret_cast to convert that memory HEX value to a DEC value. But to store the value safely I also need to find out what kind of variable to store it in: int, long... what type is big enough?
No, that's not safe. There's no guarantee that sizeof(int) == sizeof(int*).
On a 64-bit platform you're almost guaranteed that it's not.
As for the "hexadecimal value"... I'm not sure what you're talking about. If you mean the textual representation of the pointer in hexadecimal, you'd need a string.
Edit to try and help the OP based on comments:
Because computers don't work in hex. I don't know how else to explain it. An int stores some number of bits (binary), as does a long. Hexadecimal is a textual representation of those bits (specifically, the base-16 representation). Strings are used for textual representations of values. If you need a hexadecimal representation of a pointer, you need to convert that pointer to text (hex).
Here's a c++ example of how you would do that:
test.cpp
#include <string>
#include <iostream>
#include <sstream>
int main()
{
int i = 42; // an int to take the address of
int *p = &i; // declare a pointer to an int (initialized so its value is well-defined)
std::ostringstream oss; // create a stringstream
std::string s; // create a string
// this takes the value of p (the memory address), converts it to
// the hexadecimal textual representation, and puts it in the stream
oss << std::hex << p;
// Get a std::string from the stream
s = oss.str();
// Display the string
std::cout << s << std::endl;
}
Sample output:
roach$ g++ -o test test.cpp
roach$ ./test
0x7fff68e07730
It's worth noting that the same thing is needed when you want to see the base10 (decimal) representation of a number - you have to convert it to a string. Everything in memory is stored in binary (base2)
On most 64-bit targets, int is still 32-bit while pointers are 64-bit, so it won't work.
http://en.wikipedia.org/wiki/64-bit#64-bit_data_models
What you probably want is to use std::ostream's formatting of addresses:
int x(0);
std::cout << &x << '\n';
As to the length of the produced string, you need to know the size of the respective pointer: for each byte, the output will use two hex digits, because each hex digit can represent 16 values. All bytes are typically printed, even though it is unlikely that you have memory backing the whole address range, e.g. when pointers are 8 bytes as on 64-bit systems. This is because the stack often grows from the biggest address downwards, while the executable code starts at the beginning of the address range (well, the very first page may be left unused so that touching it in any way causes a segmentation violation). Above the executable code live some data segments, followed by the heap, and lots of unused pages.
There is a question addressing a similar topic:
https://stackoverflow.com/a/2369593/1010666
Summary: do not try to write pointers into non-pointer variables.
If you need to print out the pointer value, there are other solutions; one sketch follows.
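One such solution, as a hedged sketch: if you genuinely need the address as an integer rather than as text, <cstdint> provides std::uintptr_t (where the implementation defines it), which is wide enough to round-trip an object pointer:

#include <cstdint>
#include <iostream>

int main()
{
    int x = 0;
    // uintptr_t is defined to be wide enough to hold a converted object
    // pointer, unlike plain int or long.
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(&x);
    std::cout << std::hex << std::showbase << addr << '\n';

    // Converting back yields the original pointer value.
    int *p = reinterpret_cast<int *>(addr);
    std::cout << (p == &x) << '\n'; // 1
}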
The class below is supposed to represent a musical note. I want to be able to store the length of the note (e.g. 1/2 note, 1/4 note, 3/8 note, etc.) using only integers. However, I also want to be able to store the length using a floating point number for the rare case that I deal with notes of irregular lengths.
#include <string>

class note{
std::string tone;
int length_numerator;
int length_denominator;
public:
void set_length(int numerator, int denominator){
length_numerator=numerator;
length_denominator=denominator;
}
void set_length(double d){
length_numerator=d; // unfortunately truncates everything past the decimal point
length_denominator=1;
}
};
The reason it is important for me to be able to use integers rather than doubles to store the length is that in my past experience with floating point numbers, sometimes the values are unexpectedly inaccurate. For example, a number that is supposed to be 16 occasionally gets mysteriously stored as 16.0000000001 or 15.99999999999 (usually after enduring some operations) with floating point, and this could cause problems when testing for equality (because 16!=15.99999999999).
Is it possible to convert a variable from int to double (the variable, not just its value)? If not, then what else can I do to be able to store the note's length using either an integer or a double, depending on the what I need the type to be?
If your only problem is comparing floats for equality, then I'd say to use floats, but read "Comparing floating point numbers" by Bruce Dawson first. It's not long, and it explains how to compare two floating-point numbers correctly (by checking the absolute and relative difference).
When you have more time, you should also look at "What Every Computer Scientist Should Know About Floating-Point Arithmetic" to understand why 16 occasionally gets "mysteriously" stored as 16.0000000001 or 15.99999999999.
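A rough sketch of that kind of comparison (the tolerances here are illustrative, not universal constants):

#include <algorithm>
#include <cmath>
#include <iostream>

// Nearly-equal test combining an absolute and a relative tolerance,
// in the spirit of Dawson's article.
bool nearlyEqual(double a, double b,
                 double absEps = 1e-12, double relEps = 1e-9)
{
    double diff = std::fabs(a - b);
    if (diff <= absEps) return true; // handles values very close to zero
    return diff <= relEps * std::max(std::fabs(a), std::fabs(b));
}

int main()
{
    double x = 16.0000000001;
    std::cout << (x == 16.0) << '\n';          // 0: exact comparison fails
    std::cout << nearlyEqual(x, 16.0) << '\n'; // 1
}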
Attempts to use integers for rational numbers (or for fixed point arithmetic) are rarely as simple as they look.
I see several possible solutions: the first is just to use double. It's true that extended computations may result in inaccurate results, but in this case, your divisors are normally powers of 2, which will give exact results (at least on all of the machines I've seen); you only risk running into problems when dividing by some unusual value (which is the case where you'll have to use double anyway).

You could also scale the results, e.g. representing the notes as multiples of, say, 64th notes. This will mean that most values will be small integers, which are guaranteed exact in double (again, at least in the usual representations). A number that is supposed to be 16 does not get stored as 16.000000001 or 15.99999999 (but a number that is supposed to be .16 might get stored as .1600000001 or .1599999999). Before the appearance of long long, decimal arithmetic classes often used double as a 52 bit integral type, ensuring at each step that the actual value was exactly an integer. (Only division might cause a problem.)

Or you could use some sort of class representing rational numbers. (Boost has one, for example, and I'm sure there are others.) This would allow any strange values (5th notes, anyone?) to remain exact; it could also be advantageous for human readable output, e.g. you could test the denominator, and then output something like "3 quarter notes", or the like. Even something like "a 3/4 note" would be more readable to a musician than "a .75 note".
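A sketch of that last option using Boost's rational class (boost::rational from <boost/rational.hpp>); a hand-rolled fraction type would look much the same:

#include <boost/rational.hpp>
#include <iostream>

int main()
{
    using Length = boost::rational<int>;

    Length quarter(1, 4);
    Length eighth(1, 8);

    // Exact arithmetic: a dotted quarter is 1/4 + 1/8 = 3/8, no rounding.
    Length dotted = quarter + eighth;
    std::cout << dotted.numerator() << "/" << dotted.denominator() << '\n'; // 3/8

    // Equality is exact, unlike with double.
    std::cout << (dotted == Length(3, 8)) << '\n'; // 1
}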
It is not possible to convert a variable from int to double; it is possible to convert a value from int to double. I'm not completely certain which you are asking for, but maybe you are looking for a union:
union DoubleOrInt
{
double d;
int i;
};
DoubleOrInt length_numerator;
DoubleOrInt length_denominator;
Then you can write
void set_length(int numerator, int denominator){
length_numerator.i=numerator;
length_denominator.i=denominator;
}
void set_length(double d){
length_numerator.d=d;
length_denominator.d=1.0;
}
The problem with this approach is that you absolutely must keep track of whether you are currently storing ints or doubles in your unions. Bad things will happen if you store an int and then try to access it as a double. Preferably you would do this inside your class.
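A sketch of what keeping track "inside your class" could look like (the is_fraction flag and the as_double helper are illustrative names, not part of the original code):

// Sketch: the class remembers which member of the union is active.
class note_length {
    union Value { double d; int i; };
    Value numerator;
    Value denominator;
    bool is_fraction; // true: use .i, false: use .d

public:
    void set_length(int n, int d) {
        numerator.i = n;
        denominator.i = d;
        is_fraction = true;
    }
    void set_length(double d) {
        numerator.d = d;
        denominator.d = 1.0;
        is_fraction = false;
    }
    double as_double() const {
        return is_fraction
            ? static_cast<double>(numerator.i) / denominator.i
            : numerator.d / denominator.d;
    }
};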
This is normal behavior for floating-point variables. They are always rounded, and the last digits may change value depending on the operations you do. I suggest reading up on floating point somewhere (e.g. http://floating-point-gui.de/), especially about comparing fp values.
I normally subtract them, take the absolute value and compare this against an epsilon, e.g. if (abs(x-y) < epsilon).
Given that you have a set_length(double d), my guess is that you actually need doubles. Note that the conversion from double to a fraction of integers is fragile and complex, and will most probably not solve your equality problems (is 0.24999999 equal to 1/4?). It would be better for you to either choose to always use fractions, or always doubles. Then, just learn how to use them. I must say, for music, it makes sense to have fractions, as that is even how notes are described.
If it were me, I would just use an enum. To turn something into a note would be pretty simple using this system also. Here's a way you could do it:
#include <iostream>

class Note {
public:
enum Type {
// In this case, 16 represents a whole note, but it could be larger
// if demisemiquavers were used or something.
Semiquaver = 1,
Quaver = 2,
Crotchet = 4,
Minim = 8,
Semibreve = 16
};
static float GetNoteLength(const Type &note)
{ return static_cast<float>(note)/16.0f; }
static float TieNotes(const Type &note1, const Type &note2)
{ return GetNoteLength(note1)+GetNoteLength(note2); }
};
int main()
{
// Make a semiquaver
Note::Type sq = Note::Semiquaver;
// Make a quaver
Note::Type q = Note::Quaver;
// Dot it with the semiquaver from before
float dottedQuaver = Note::TieNotes(sq, q);
std::cout << "Semiquaver is equivalent to: " << Note::GetNoteLength(sq) << " beats\n";
std::cout << "Dotted quaver is equivalent to: " << dottedQuaver << " beats\n";
return 0;
}
Those 'irregular' notes you speak of can be obtained using TieNotes.