Converting 4 raw bytes into 32-bit floating point - c++

I'm trying to reconstruct a 32-bit floating point value from an EEPROM.
The 4 bytes in EEPROM memory (addresses 0-3) are: B4 A2 91 4D
and the PC (Visual Studio) reconstructs it correctly as 3.054199 * 10^8 (the floating point value I know should be there).
Now I'm moving this EEPROM to be read from an 8-bit Arduino, so I'm not sure if it's a compiler/platform thing, but when I try reading the 4 bytes into a 32-bit dword and then typecast it to a float, the value I get isn't even close.
Assuming the conversion can't be done automatically by a standard ANSI C compiler, how can the 4 bytes be manually parsed into a float?

The safest way - and, thanks to compiler optimization, as fast as any other - is to use memcpy:
uint32_t dword = 0x4D91A2B4;
float f;
memcpy(&f, &dword, 4);
Demo: http://ideone.com/riDfFw

As Shafik Yaghmour mentioned in his answer - it's probably an endianness issue, since that's the only likely problem you could encounter with such a low-level operation. While Shafik's answer in the question he linked basically covers the process of handling such an issue, I'll just leave you some information:
As stated on the Arduino forums, Arduino uses little endian. If you're not sure what the endianness of the system you'll end up working on will be, but want to make your code semi-multiplatform, you can check the endianness at runtime with a simple code snippet:
bool isBigEndian() {
    int number = 1;
    return (*(char*)&number != 1);
}
Be advised that - as all things - this consumes some of your processor time and makes your program run slower, and while that's nearly always a bad thing, you can still use it to check the results in a debug version of your app.
How this works is that it tests the first byte of the int stored at the address pointed to by &number. If the first byte is not 1, it means the bytes are big endian.
Also - this will only work if sizeof(int) > sizeof(char).
You can also embed this in your code:
float getFromEeprom(int address) {
    char bytes[sizeof(float)];
    if (isBigEndian()) {
        for (unsigned int i = 0; i < sizeof(float); i++)
            bytes[sizeof(float) - 1 - i] = EEPROM.read(address + i);
    }
    else {
        for (unsigned int i = 0; i < sizeof(float); i++)
            bytes[i] = EEPROM.read(address + i);
    }
    float result;
    memcpy(&result, bytes, sizeof(float));
    return result;
}

You need to cast at the pointer level.
int myFourBytes = /* something */;
float* myFloat = (float*) &myFourBytes;
std::cout << *myFloat;
Should work.
If the data is generated on a different platform that stores values in the opposite endianness, you'll need to manually swap the bytes around. E.g.:
unsigned char myFourBytes[4] = { 0xB4, 0xA2, 0x91, 0x4D };
std::swap(myFourBytes[0], myFourBytes[3]);
std::swap(myFourBytes[1], myFourBytes[2]);
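If you want to put the two pieces together, here's a minimal sketch, assuming the bytes came from a little-endian source and the target uses IEEE 754 floats; memcpy is used for the reinterpretation, which sidesteps the strict-aliasing questions around the pointer cast:

#include <algorithm> // std::swap
#include <cstdio>
#include <cstring>

int main()
{
    // The four EEPROM bytes from the question, least significant byte first.
    unsigned char myFourBytes[4] = { 0xB4, 0xA2, 0x91, 0x4D };

    // Only needed when the reading platform is big-endian:
    // std::swap(myFourBytes[0], myFourBytes[3]);
    // std::swap(myFourBytes[1], myFourBytes[2]);

    float myFloat;
    std::memcpy(&myFloat, myFourBytes, sizeof myFloat);
    std::printf("%g\n", myFloat); // ~3.0542e+08 on a little-endian IEEE machine
    return 0;
}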

Related

Writing a program for a computer that uses little or big endian, and having the same result [duplicate]

This question is about endianness.
The goal is to write 2 bytes to a file for a game on a computer. I want to make sure that people with different computers have the same result, whether they use little or big endian.
Which of these snippets do I use?
char a[2] = { 0x5c, 0x7B };
fout.write(a, 2);
or
int a = 0x7B5C;
fout.write((char*)&a, 2);
Thanks a bunch.
From Wikipedia:
In its most common usage, endianness indicates the ordering of bytes within a multi-byte number.
So for char a[2] = { 0x5c, 0x7B };, a[1] will always be 0x7B.
However, for int a = 0x7B5C;, the first byte ((char*)&a)[0] may be 0x7B or 0x5C depending on the platform, so as you can see, you have to play with casts and byte pointers (bear in mind that such type-punning reads can skirt undefined behaviour; this is only for explanation purposes).
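If it helps to see this concretely, here is a tiny sketch that dumps the bytes of an int in memory order; the output depends on the machine it runs on:

#include <cstdio>

int main()
{
    int a = 0x7B5C;
    unsigned char *p = reinterpret_cast<unsigned char *>(&a);
    // A little-endian machine typically prints "5C 7B 00 00",
    // a big-endian one "00 00 7B 5C" (assuming a 4-byte int).
    for (unsigned i = 0; i < sizeof a; ++i)
        std::printf("%02X ", p[i]);
    std::printf("\n");
    return 0;
}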
One way that is used quite often is to write a 'signature' or 'magic' number as the first data in the file - typically a 16-bit integer whose value, when read back, will depend on whether or not the reading platform has the same endianness as the writing platform. If you then detect a mismatch, all data (of more than one byte) read from the file will need to be byte swapped.
Here's some outline code:
void ByteSwap(void *buffer, size_t length)
{
    unsigned char *p = static_cast<unsigned char *>(buffer);
    for (size_t i = 0; i < length / 2; ++i) {
        unsigned char tmp = *(p + i);
        *(p + i) = *(p + length - i - 1);
        *(p + length - i - 1) = tmp;
    }
    return;
}
bool WriteData(void *data, size_t size, size_t num, FILE *file)
{
    uint16_t magic = 0xAB12; // Something that can be tested for byte-reversal
    if (fwrite(&magic, sizeof(uint16_t), 1, file) != 1) return false;
    if (fwrite(data, size, num, file) != num) return false;
    return true;
}

bool ReadData(void *data, size_t size, size_t num, FILE *file)
{
    uint16_t test_magic;
    bool is_reversed;
    if (fread(&test_magic, sizeof(uint16_t), 1, file) != 1) return false;
    if (test_magic == 0xAB12) is_reversed = false;
    else if (test_magic == 0x12AB) is_reversed = true;
    else return false; // Error - needs handling!
    if (fread(data, size, num, file) != num) return false;
    if (is_reversed && (size > 1)) {
        for (size_t i = 0; i < num; ++i)
            ByteSwap(static_cast<char *>(data) + (i * size), size);
    }
    return true;
}
Of course, in the real world, you wouldn't need to write/read the 'magic' number for every input/output operation - just once per file, and store the is_reversed flag for future use when reading data back.
Also, with proper use of C++, you would probably be using std::iostream arguments rather than the FILE* I have shown - but the sample I have posted has been extracted (with only very little modification) from code that I actually use in my projects (to do just this test). Conversion to more modern C++ should be straightforward.
Feel free to ask for further clarification and/or explanation.
NOTE: The ByteSwap function I have provided is not ideal! It almost certainly breaks strict aliasing rules and may well cause undefined behaviour on some platforms if used carelessly. Also, it is not the most efficient method for small data units (like int variables). One could (and should) provide one's own byte-reversal function(s) to handle specific types of variables - a good case for overloading the function with different argument types.
Which of these snippets do I use?
The first one. It has same output regardless of native endianness.
But you'll find that if you need to interpret those bytes as some integer value, that is not so straightforward. char a[2] = { 0x5c, 0x7B } can represent either 0x5C7B (big endian) or 0x7B5C (little endian). So, which one did you intend?
The solution for cross-platform interpretation of integers is to decide on a particular byte order for reading and writing. The de facto "standard" for cross-platform data is to use big endian.
To write a number in big endian, start by bit-shifting the input value right so that the most significant byte is in the place of the least significant byte. Mask all other bytes (technically redundant in first iteration, but we'll loop back soon). Write this byte to the output. Repeat this for all other bytes in order of significance.
This algorithm produces same output regardless of the native endianness - it will even work on exotic "middle" endian systems if you ever encounter one. Writing to little endian is similar, but in reverse order.
To read a big endian value, read the first byte of input, shift it left so that it goes to the place of most significant byte. Combine the shifted byte with the result (initially zero) using bitwise-or. Repeat with the next byte by shifting to the second most significant place and so on.
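As a minimal sketch of both directions for a 16-bit value (the same pattern extends to 32- and 64-bit types; the stream types match the fout.write style in the question):

#include <cstdint>
#include <istream>
#include <ostream>

// Write v in big endian, regardless of native byte order.
void write_u16_be(std::ostream &out, uint16_t v)
{
    char bytes[2];
    bytes[0] = static_cast<char>((v >> 8) & 0xFF); // most significant byte first
    bytes[1] = static_cast<char>(v & 0xFF);
    out.write(bytes, 2);
}

// Read it back; also independent of native byte order.
uint16_t read_u16_be(std::istream &in)
{
    unsigned char bytes[2];
    in.read(reinterpret_cast<char *>(bytes), 2);
    return static_cast<uint16_t>((bytes[0] << 8) | bytes[1]);
}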
How to know the endianness of a computer?
To know endianness of a system, you can use std::endian in the upcoming C++20. Prior to that, you can use implementation specific macros from endian.h header. Or you can do a simple calculation like you suggest.
But you never really need to know the endianness of a system. You can simply use the algorithms that I described, which work on systems of all endianness without having to know what that endianness is.

Concatenate binary bit-sized strings

I want to write to a file a series of binary strings whose lengths are expressed in bits rather than bytes. Consider two strings s1 and s2 that in binary are respectively 011 and 01011. In this case the content of the output file has to be: 01101011 (1 byte). I am trying to do this in the most efficient way possible, since I have several million strings to concatenate, for a total of several GB of output.
C++ has no way of working directly with bits because it aims at being a light layer above the hardware, and the hardware itself is not bit oriented. The very minimum amount of bits you can read/write in one operation is a byte (normally 8 bits).
Also if you need to do disk i/o it's better to write your data in blocks instead of one byte at a time. The library has some buffering, but the earlier things are buffered the faster the code will be (less code is involved in passing data around).
A simple approach could be
unsigned char iobuffer[4096];
int bufsz = 0;                          // how many bytes are present in the buffer
unsigned long long bit_accumulator = 0;
int acc_bits = 0;                       // how many bits are present in the accumulator

void writeCode(unsigned long long code, int bits) {
    bit_accumulator |= code << acc_bits;
    acc_bits += bits;
    while (acc_bits >= 8) {
        iobuffer[bufsz++] = bit_accumulator & 255;
        bit_accumulator >>= 8;
        acc_bits -= 8;
        if (bufsz == sizeof(iobuffer)) {
            // Write the buffer to disk
            bufsz = 0;
        }
    }
}
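One detail the snippet leaves open: after the last writeCode call, up to 7 bits may still sit in the accumulator, plus whatever is in iobuffer, so you need a final flush along these lines (the disk write is left as a placeholder, as in the snippet above):

void flushRemaining() {
    if (acc_bits > 0) {
        // Pad the final partial byte with zero bits.
        iobuffer[bufsz++] = bit_accumulator & 255;
        bit_accumulator = 0;
        acc_bits = 0;
    }
    if (bufsz > 0) {
        // Write the remaining bufsz bytes to disk
        bufsz = 0;
    }
}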
There is no single optimal way to solve your problem per se, but you can use a few tricks to speed things up:
Experiment with the file I/O sync flag. It might be that one setting is significantly faster than the other because of buffering and caching.
Try to use architecture-sized variables so that they fit into registers directly: uint32_t for 32-bit machines and uint64_t for 64-bit machines ...
Be careful with volatile: it forces memory accesses and keeps values out of registers, so it usually slows this kind of code down rather than helping.
Use pointers and references for large data, and copy small data blobs (to avoid unnecessary copies of large data, and excessive lookups and page touching for small data).
Use mmap of the file for direct access, and align your output to the page size of your architecture and hard disk (usually 4 KiB = 4096 bytes).
Try to reduce branching (constructs like "if", "for", "while", "() ? :") and linearize your code.
And if that is not enough and the going gets rough: use assembler (but I would not recommend that for beginners).
I think multithreading would be counterproductive in this case, because of the limited number of file writes that can be issued, and because the problem is not easily divisible into little tasks: each one would need to know how many bits after the other ones it has to start, and you would have to join all the results together at the end.
I've used the following in the past, it might help a bit...
FileWriter.h:
#ifndef FILE_WRITER_H
#define FILE_WRITER_H

#include <stdio.h>

class FileWriter
{
public:
    FileWriter(const char* pFileName);
    virtual ~FileWriter();
    void AddBit(int iBit);
private:
    FILE* m_pFile;
    unsigned char m_iBitSeq;
    unsigned char m_iBitSeqLen;
};

#endif
FileWriter.cpp:
#include "FileWriter.h"
#include <limits.h>
FileWriter::FileWriter(const char* pFileName)
{
    m_pFile = fopen(pFileName, "wb");
    m_iBitSeq = 0;
    m_iBitSeqLen = 0;
}

FileWriter::~FileWriter()
{
    while (m_iBitSeqLen > 0)   // pad the final partial byte with zero bits
        AddBit(0);
    fclose(m_pFile);
}

void FileWriter::AddBit(int iBit)
{
    // Shift the new bit in at the top; the first bit added ends up in the
    // least significant position of the finished byte.
    m_iBitSeq >>= 1;
    m_iBitSeq |= iBit << (CHAR_BIT - 1);
    m_iBitSeqLen++;
    if (m_iBitSeqLen == CHAR_BIT)
    {
        fwrite(&m_iBitSeq, 1, 1, m_pFile);
        m_iBitSeqLen = 0;
    }
}
You can further improve it by accumulating the data up to a certain amount before writing it into the file.
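For instance, here is a hedged sketch of that improvement; the m_buffer/m_bufUsed members and WriteByte are additions of mine, not part of the original class:

// Add to FileWriter:
//     unsigned char m_buffer[4096];
//     size_t m_bufUsed;   // initialize to 0 in the constructor
void FileWriter::WriteByte(unsigned char byte)
{
    m_buffer[m_bufUsed++] = byte;
    if (m_bufUsed == sizeof(m_buffer))
    {
        fwrite(m_buffer, 1, m_bufUsed, m_pFile); // flush a full block at once
        m_bufUsed = 0;
    }
}
// In AddBit, call WriteByte(m_iBitSeq) instead of fwrite, and in the
// destructor flush the tail before fclose:
//     if (m_bufUsed > 0) fwrite(m_buffer, 1, m_bufUsed, m_pFile);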

read float and double from binary data in C++

I need to be able to read in a float or double from binary data in C++, similarly to Python's struct.unpack function. My issue is that the data I am receiving will always be big-endian. I have dealt with this for integer values as described here, but working byte by byte does not work with floating point values. I need a way to extract floating point values (both 32-bit floats and 64-bit doubles) in C++, similar to how you would use struct.unpack(">f", num) or struct.unpack(">d", num) in Python.
Here's an example of what I have tried:
struct.unpack("d", num) ==> *(double*) str; // if str is a char* containing the data
That works fine if str is little-endian, but not if it is big-endian, as I know it will always be. The problem is that I do not know what the native endianness of the environment will be, so I need to be able to extract the binary data as big-endian at all times.
If you look at the linked question, you'll see this is easily done using bitwise-ors and bit shifts for integer values, but that method does not work for floating point.
NOTE: I should have pointed this out earlier, but I cannot use C++11 or any third-party libraries other than Boost.
Why would working byte by byte not work with floating point values?
Just extract the 32-bit integer as usual, then reinterpret it as a float: float f = *(float*)&i;
And the same for 64-bit integers and double.
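Putting the two steps together for the big-endian data in the question, a sketch for the 32-bit case might look like this; memcpy is used for the reinterpretation, which avoids the aliasing questions around *(float*)&i, and the same shape works for double with uint64_t and eight bytes:

#include <stdint.h>
#include <string.h>

float unpack_be_float(const unsigned char *str)
{
    // Assemble the four big-endian bytes into an integer, most significant first.
    uint32_t i = ((uint32_t)str[0] << 24) | ((uint32_t)str[1] << 16)
               | ((uint32_t)str[2] << 8)  |  (uint32_t)str[3];
    float f;
    memcpy(&f, &i, sizeof f); // reinterpret the bits as an IEEE 754 float
    return f;
}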
void ByteSwap(void *data, int size)
{
    char *ptr = (char *)data;
    for (int i = 0; i < size / 2; ++i)
        std::swap(ptr[i], ptr[size - 1 - i]);
}

bool LittleEndian()
{
    int test = 1;
    return *((char *)&test) == 1;
}

if (LittleEndian())
    ByteSwap(&my_double, sizeof(double));

How to store double - endian independent

Despite the fact that big-endian computers are not very widely used, I want to store the double datatype in an endianness-independent format.
For int, this is really simple, since bit shifts make it very convenient.
int number;
int size = sizeof(number);
char bytes[size];
for (int i = 0; i < size; ++i)
    bytes[size - 1 - i] = (number >> 8 * i) & 0xFF;
This code snippet stores the number in big-endian format, regardless of the machine it is run on. What is the most elegant way to do this for double?
The best way for portability and taking format into account, is serializing/deserializing the mantissa and the exponent separately. For that you can use the frexp()/ldexp() functions.
For example, to serialize:
int exp;
unsigned long long mant;
mant = (unsigned long long)(ULLONG_MAX * frexp(number, &exp));
// then serialize exp and mant.
And then to deserialize:
// deserialize to exp and mant.
double result = ldexp ((double)mant / ULLONG_MAX, exp);
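A quick round-trip check of the scheme (note this sketch doesn't treat negative numbers, zero, or NaN specially; frexp returns a fraction in [0.5, 1) for positive finite input):

#include <limits.h>
#include <math.h>
#include <stdio.h>

int main()
{
    double number = 3.054199e8;
    int exp;
    unsigned long long mant =
        (unsigned long long)(ULLONG_MAX * frexp(number, &exp));
    double result = ldexp((double)mant / ULLONG_MAX, exp);
    printf("%g -> %g\n", number, result); // should print the same value twice
    return 0;
}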
The elegant thing to do is to limit the endianness problem to as small a scope as possible. That narrow scope is the I/O boundary between your program and the outside world. For example, the functions that send binary data to / receive binary data from some other application need to be aware of the endian problem, as do the functions that write binary data to / read binary data from some data file. Make those interfaces cognizant of the representation problem.
Make everything else blissfully ignorant of the problem. Use the local representation everywhere else. Represent a double precision floating point number as a double rather than an array of 8 bytes, represent a 32 bit integer as an int or int32_t rather than an array of 4 bytes, et cetera. Dealing with the endianness problem throughout your code is going to make your code bloated, error prone, and ugly.
The same approach works. Any numeric object, including a double, is eventually just several bytes that are interpreted in a specific order according to endianness. So if you reverse the order of the bytes you'll get exactly the same value in the reversed endianness.
char *src_data;
char *dst_data;
// mask = 7 if the native order is little endian,
// mask = 0 if the native order is big endian
for (size_t i = 0; i < N * sizeof(double); i++)
    *dst_data++ = src_data[i ^ mask];
The elegance lies in mask, which also handles short and integer types: it's sizeof(elem) - 1 if the target and source endianness differ.
Not very portable and standards violating, but something like this:
std::array<unsigned char, 8> serialize_double( double const* d )
{
    std::array<unsigned char, 8> retval;
    char const* begin = reinterpret_cast<char const*>(d);
    union
    {
        uint8_t  i8s[8];
        uint16_t i16s[4];
        uint32_t i32s[2];
        uint64_t i64s;
    } u;
    u.i64s = 0x0001020304050607ull; // one byte order
    // u.i64s = 0x0706050403020100ull; // the other byte order
    for (size_t index = 0; index < 8; ++index)
    {
        retval[ u.i8s[index] ] = begin[index];
    }
    return retval;
}
This might handle a platform with 8-bit chars, 8-byte doubles, and any crazy-ass byte ordering (i.e., big endian within words but little endian between words for 64-bit values, for example).
Now, this doesn't cover the endianness of doubles being different from that of 64-bit ints.
An easier approach might be to cast your double into a 64 bit unsigned value, then output that as you would any other int.
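A sketch of that easier approach, assuming 64-bit IEEE doubles: memcpy moves the bit pattern into an integer, which is then written most significant byte first like any other int:

#include <stdint.h>
#include <string.h>

void serialize_double_be(double d, unsigned char out[8])
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);             // grab the raw bit pattern
    for (int i = 0; i < 8; ++i)
        out[i] = (bits >> (56 - 8 * i)) & 0xFF; // most significant byte first
}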
void reverse_endian(double number, char (&bytes)[sizeof(double)])
{
    const int size = sizeof(number);
    memcpy(bytes, &number, size);
    for (int i = 0; i < size / 2; ++i)
        std::swap(bytes[i], bytes[size - i - 1]);
}

Float to binary in C++

I'm wondering if there is a way to represent a float using a char in C++?
For example:
int main()
{
float test = 4.7567;
char result = charRepresentation(test);
return 0;
}
I read that I can probably do it using bitset, but I'm not quite sure.
Let's suppose that my float variable is 01001010 01001010 01001010 01001010 in binary.
If I want a char array of 4 elements, the first element will be 01001010, the second: 01001010 and so on.
Can I represent the float variable in a char array of 4 elements?
I suspect what you're trying to say is:
#include <cstdio>
#include <cstring>

int main()
{
    float test = 4.7567;
    char result[sizeof(float)];
    memcpy(result, &test, sizeof(test));
    /* now result is storing the float,
       but you can treat it as an array of
       arbitrary chars
       for example:
    */
    for (size_t n = 0; n < sizeof(float); ++n)
        printf("%02x", (unsigned char)result[n]);
    return 0;
}
Edited to add: all the people pointing out that you can't fit a float into 8 bits are of course correct, but actually the OP is groping towards the understanding that a float, like all atomic datatypes, is ultimately a simple contiguous block of bytes. This is not obvious to all novices.
The best you can do is create a custom float type that is byte-sized, or use the char as a fixed-point decimal. In either case this will lead to a significant loss of precision.
Using a union is clean and easy:
union
{
    float f;
    unsigned int ul;
    unsigned char uc[4];
} myfloatun;

myfloatun.f = somenum;
printf("0x%08X\n", myfloatun.ul);
Much safer from a compiler perspective than pointers. Memcpy works just fine too.
EDIT
Okay, okay, here are fully functional examples. Yes, you have to use unions with care: if you don't keep an eye on how the compiler allocates the union and pads or aligns it, it can break - and this is why some/many say it is dangerous to use unions in this way. Yet the alternatives are considered safe?
Doing some reading, C++ has its own problems with unions, and a union may very well just not work. If you really meant C++ and not C then this is probably bad. If you said Kleenex and meant tissues then this might work.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

typedef union
{
    float f;
    unsigned char uc[4];
} FUN;

void charRepresentation ( unsigned char *uc, float f)
{
    FUN fun;
    fun.f = f;
    uc[0] = fun.uc[3];
    uc[1] = fun.uc[2];
    uc[2] = fun.uc[1];
    uc[3] = fun.uc[0];
}

void floatRepresentation ( unsigned char *uc, float *f )
{
    FUN fun;
    fun.uc[3] = uc[0];
    fun.uc[2] = uc[1];
    fun.uc[1] = uc[2];
    fun.uc[0] = uc[3];
    *f = fun.f;
}

int main()
{
    unsigned int ra;
    float test;
    unsigned char result[4];
    FUN fun;

    if(sizeof(fun) != 4)
    {
        printf("It aint gonna work!\n");
        return(1);
    }
    test = 4.7567F;
    charRepresentation(result, test);
    for(ra=0; ra<4; ra++) printf("0x%02X ", result[ra]); printf("\n");
    test = 1.0F;
    charRepresentation(result, test);
    for(ra=0; ra<4; ra++) printf("0x%02X ", result[ra]); printf("\n");
    test = 2.0F;
    charRepresentation(result, test);
    for(ra=0; ra<4; ra++) printf("0x%02X ", result[ra]); printf("\n");
    test = 3.0F;
    charRepresentation(result, test);
    for(ra=0; ra<4; ra++) printf("0x%02X ", result[ra]); printf("\n");
    test = 0.0F;
    charRepresentation(result, test);
    for(ra=0; ra<4; ra++) printf("0x%02X ", result[ra]); printf("\n");
    test = 0.15625F;
    charRepresentation(result, test);
    for(ra=0; ra<4; ra++) printf("0x%02X ", result[ra]); printf("\n");

    result[0] = 0x3E;
    result[1] = 0xAA;
    result[2] = 0xAA;
    result[3] = 0xAB;
    floatRepresentation(result, &test);
    printf("%f\n", test);
    return 0;
}
And the output looks like this
gcc fun.c -o fun
./fun
0x40 0x98 0x36 0xE3
0x3F 0x80 0x00 0x00
0x40 0x00 0x00 0x00
0x40 0x40 0x00 0x00
0x00 0x00 0x00 0x00
0x3E 0x20 0x00 0x00
0.333333
You can verify by hand, or look at this website (I took the examples directly from it); the output matches what was expected.
http://en.wikipedia.org/wiki/Single_precision
What you do not ever want to do is point at memory with a pointer to look at it with a different type. I never understood why this practice is used so often, particularly with structs.
int broken_code ( void )
{
    float test;
    unsigned char *result;

    test = 4.567;
    result = (unsigned char *)&test;
    //do something with result here
    test = 1.2345;
    //do something with result here
    return 0;
}
That code will work 99% of the time but not 100% of the time. It will fail when you least expect it and at the worst possible time, like the day after your most important customer receives it. It's the optimizer that eats your lunch with this coding style. Yes, I know most of you do this and were taught this and perhaps have never been burned....yet. That just makes it more painful when it finally does happen, because now you know that it can and has failed (with popular compilers like gcc, on common computers like a PC).
After seeing this fail when using this method for testing an FPU, programmatically building specific floating point numbers/patterns, I switched to the union approach, which so far has never failed. By definition the elements in the union share the same chunk of storage, and the compiler and optimizer do not get confused about the two items in that shared chunk of storage being... in that same shared chunk of storage. With the above code you are relying on an assumption that there is non-register memory storage behind every use of the variables and that all variables are written back to that storage before the next line of code. Fine if you never optimize, or if you use a debugger. The optimizer does not know in this case that result and test share the same chunk of memory, and that is the root of the problem/bug. To play the pointer game you have to get into putting volatile on everything; like a union, you still have to know how the compiler aligns and pads, and you still have to deal with endians.
The problem, generically, is that the compiler doesn't know the two items share the same memory space. For the specific trivial example above I have watched the compiler optimize out the assignment of the number to the floating point variable because that value/variable is never used. The address of the storage for that variable is used, and if you were to, say, printf the *result data, the compiler would not optimize out the result pointer, and thus not optimize out the address of test, and thus not optimize out the storage for test; but in this simple example it can and has happened that the numbers 4.567 and 1.2345 never make it into the compiled program. I have also seen the compiler allocate the storage for test, but assign the numbers to a floating point register, then never use that register nor copy its contents to the storage it has assigned. The reasons why it fails for less trivial examples can be harder to follow, often having to do with register allocation and eviction; change a line of code and it works, change another and it breaks.
Memcpy:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void charRepresentation ( unsigned char *uc, float *f)
{
    memcpy(uc, f, 4);
}

void floatRepresentation ( unsigned char *uc, float *f )
{
    memcpy(f, uc, 4);
}

int main()
{
    unsigned int ra;
    float test;
    unsigned char result[4];

    ra = 0;
    if(sizeof(test) != 4) ra++;
    if(sizeof(result) != 4) ra++;
    if(ra)
    {
        printf("It aint gonna work\n");
        return(1);
    }

    test = 4.7567F;
    charRepresentation(result, &test);
    printf("0x%02X ", result[3]);
    printf("0x%02X ", result[2]);
    printf("0x%02X ", result[1]);
    printf("0x%02X\n", result[0]);

    test = 0.15625F;
    charRepresentation(result, &test);
    printf("0x%02X ", result[3]);
    printf("0x%02X ", result[2]);
    printf("0x%02X ", result[1]);
    printf("0x%02X\n", result[0]);

    result[3] = 0x3E;
    result[2] = 0xAA;
    result[1] = 0xAA;
    result[0] = 0xAB;
    floatRepresentation(result, &test);
    printf("%f\n", test);
    return 0;
}
gcc fcopy.c -o fcopy
./fcopy
0x40 0x98 0x36 0xE3
0x3E 0x20 0x00 0x00
0.333333
Whatever flaming I get for my comments above, and whichever side of the argument you choose to be on: memcpy is perhaps your safest route. You still have to know the compiler very well and manage your endians. The compiler should not screw up the memcpy; it should store the registers to memory before the call and execute in order.
You can only do so partially in a way that won't allow you to fully recover the original float. In general, this is called Quantization, and depending on your requirements there is an art to picking a good quantization. For example, floating point values used to represent R, G and B in a pixel will be converted to a char before being displayed on a screen.
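For example, the usual pixel-style quantization maps a float in [0, 1] onto the 256 levels of an unsigned char; a minimal sketch:

unsigned char quantize(float x)   // x assumed to be in [0.0, 1.0]
{
    return (unsigned char)(x * 255.0f + 0.5f); // round to the nearest of 256 levels
}

float dequantize(unsigned char c)
{
    return c / 255.0f; // recovers the value to within about 0.002
}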
Alternatively, it's easy to store a float in its entirety as four chars, with each char storing some of the information about the original number.
You can create, for that number, a fixed-point value using 3 bits for the whole number and 5 bits for the fractional portion (unsigned; you'd have to drop a fraction bit to make room for a sign). That would be able to store roughly 4.75 in terms of accuracy. You don't have enough bits to represent that number much more accurately - unless you used a ROM lookup table of 256 entries, where you are storing your info outside the number itself, in the translator.
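A sketch of that fixed-point layout, here as an unsigned 3.5 format (3 integer bits, 5 fraction bits; one of several possible choices):

unsigned char to_fixed_3_5(float x)   // representable range: 0 .. 7.96875
{
    return (unsigned char)(x * 32.0f + 0.5f); // 5 fraction bits => scale by 2^5
}

float from_fixed_3_5(unsigned char c)
{
    return c / 32.0f; // 4.7567 comes back as 4.75
}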
int main()
{
float test = 4.7567;
char result = charRepresentation(test);
return 0;
}
If we ignore that your float is a float and convert 47567 into binary, we get 10111001 11001111. This is 16 bits, which is twice the size of a char (8 bits). Floats store their numbers by storing a sign bit (+ or -), an exponent (where to put the decimal point, in this case 10^-4), and then the significant digits (47567).
There's just not enough room in a char to store a float.
Alternatively, consider that a char can store only 256 different values. With four decimal places of precision, there are far more than 256 different values between 1 and 4.7567 or even 4 and 4.7567. Since you can't differentiate between more than 256 different values, you don't have enough room to store it.
You could conceivably write something that would 'translate' from a float to a char by limiting yourself to an extremely small range of values and only one or two decimal places*, but I can't think of any reason you would want to.
*You can store any integer value between 0 and 255 in a char, so if you always multiplied the value in the char by 10^-1 or 10^-2 (you could only use one of these options, not both, since there isn't enough room to store the exponent) you could store any number between 0 and 25.5 or 0 and 2.55. I don't know what use this would have, though.
A C char is only 8 bits (on most platforms). The basic problem this causes is two-fold. First, almost all FPUs in existence support IEEE floating point. That means floating point values either require 32 bits, or 64. Some support other non-standard sizes, but the only ones I'm aware of are 80 bits. None I have ever heard of support floats of only 8 bits. So you couldn't have hardware support for an 8-bit float.
More importantly, you wouldn't be able to get a lot of digits out of an 8-bit float. Remember that some bits are used to represent the exponent. You'd have almost no precision left for your digits.
Are you perhaps instead wanting to know about Fixed point? That would be doable in a byte.