I have come to believe that the optimal size for a boolean variable is the natural width of the data, i.e. in C/C++ it is int. So for modern processors this is normally 32 bits. At the machine level, declaring it as a byte, for example, requires a 32-bit fetch and then a mask.
However, I have seen that a BOOL in iOS is 8 bits. I had assumed that people who used bytes were carrying over ideas from 8-bit processors.
I realise this question depends on the use, and most of the time the language-defined boolean is the best bet, but there are times when you need to define your own, such as when you are converting code arriving from an external source or you want to write cross-platform code.
It is also significant that if a boolean value is going to be packed into a serial stream, for sending over a serial line such as Ethernet or for storage, it may be optimal to pack the boolean into fewer bits. But I suspect it is still optimal to pack and unpack from a processor-optimal size.
So my question is: am I correct in thinking that the optimal size for a boolean on a 32-bit processor is 32 bits, and if so, why does iOS use 8 bits?
Yup, you are right: it depends. The big advantage of using an 8-bit type is that you can pack more into a struct nicely.
Of course you'd be best off using flags in such a case.
The big issue, though, is that with a C/C++ "bool" you don't necessarily know how big it is. This means that you can't make assumptions about a struct (such as binary writing to disk) without the possibility of it breaking on another platform. In such a case using a known sized variable can be very useful and you may as well use as little space as possible if you are going to dump the structure to disk.
The notion of an 8-bit quantity involving a 32-bit fetch followed by hardware masking is mostly obsolete. In reality, a fetch from memory (on a modern processor) will normally be one L2 cache line (typically around 64-128 bytes). That being the case, essentially every size of item you deal with involves fetching a big chunk of data, and then using only some subset of what you fetched (but, assuming your data is more or less contiguous, probably using more of that data subsequently).
C++ attempts (not necessarily successfully) to optimize this a bit for you. An individual bool can be anywhere from one byte on up, though on most typical implementations it's either one byte or four bytes. The (much reviled) std::vector<bool> uses some tricks to give a (sort of) vector-like interface, but still store each bool in one bit. In the process it loses the ability to be treated as a generic sequence container -- but when you're storing a lot of bools, and can live with the restrictions of using it in an array-like manner, it can actually be a lot more useful than many people believe.
When/if you want to retain normal container semantics and don't mind the extra storage space to keep them at their native size, you can use another container (e.g., std::deque<bool>) instead. Especially if you only need to store a small collection of bools, this can often be a superior alternative.
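As a small sketch of that trade-off (standard library only; the element counts are arbitrary):
#include <deque>
#include <iostream>
#include <vector>

int main() {
    // std::vector<bool> packs each element into a single bit; std::deque<bool>
    // stores real bool objects (at least one byte each).
    std::vector<bool> packed(1000, false);
    std::deque<bool>  plain(1000, false);

    packed[42] = true;
    plain[42] = true;

    // vector<bool>::operator[] returns a proxy object rather than a bool&,
    // which is why vector<bool> can't serve as a fully generic container:
    // bool& r1 = packed[42];   // would not compile
    bool& r2 = plain[42];       // fine: deque<bool> holds real bools
    std::cout << packed[42] << ' ' << r2 << '\n';
}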
It is architecture dependent, but on many 32 bit architectures 8 bit addressing is no less efficient than 32 bit; the "fetching and masking" as such is performed in hardware logic.
The optimal size in terms of storage space is of course 1 bit. You might, for example, use bit-fields or bit masking to pack multiple booleans into a single word. Some architectures such as the 8051 have bit-addressable memory. The more modern ARM Cortex-M architecture employs a technique called bit-banding that allows memory and hardware registers to be bit-addressable.
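For example, a minimal C++ sketch of the bit-field and bit-masking approaches mentioned above (the flag names are invented for illustration):
#include <cstdint>

// Bit-field approach: the compiler packs the flags for you.
struct Flags {
    std::uint32_t ready  : 1;
    std::uint32_t active : 1;
    std::uint32_t failed : 1;
};

// Explicit masking approach on a single 32-bit word.
constexpr std::uint32_t READY  = 1u << 0;
constexpr std::uint32_t ACTIVE = 1u << 1;
constexpr std::uint32_t FAILED = 1u << 2;

std::uint32_t flags = 0;

void set_active(bool on) {
    if (on) flags |= ACTIVE;      // set the bit
    else    flags &= ~ACTIVE;     // clear the bit
}

bool is_active() {
    return (flags & ACTIVE) != 0; // test the bit
}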
A protocol like FAST for encoding data is very clever in minimizing the amount of data that needs to be sent. Essentially one gets a char*, reading the first couple of bytes as an integer gives you an id number which points you to instructions on how to decode the rest (i.e. it tells you the rest of the bytes are, for example, respectively an int, a string, an unsigned int, another unsigned int, a nested message etc.) and the next couple of bytes tell you (in each bit) whether the subsequent fields are there or not. The 8th bit in every byte is reserved to denote boundaries between the data.
Decoding such a protocol seems impossible to do without a linear traversal of bit ops (ands, ors, shifts, bit checks)...is there any way to do this faster?
I think you won't find a technique or approach faster than the one just described. Such protocols are meant to be parsed sequentially. The only reason to continue searching is to find a more convenient way to work with the data.
As far as I see, there are three ways:
Just do it as you described.
Use a low-level framework such as Boost::Spirit, which has binary parsers.
Try to use a ready-to-go library: for example QuickFAST -- an implementation of the FAST protocol for native C++ and .NET.
Decoding such a protocol seems impossible to do without a linear traversal of bit ops (ands, ors, shifts, bit checks)...is there any way to do this faster?
The FAST protocol encodes values using a stop-bit approach, in other words, it parses byte by byte sequentially until it hits a byte with the 8th bit set to 1. You lose a bit in every byte doing that, but overall it is a fast way to encode many different fields without requiring a separator byte between them.
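As a rough illustration of that stop-bit loop for the unsigned-integer case (my own sketch, not code taken from the FAST specification or from CoralFIX):
#include <cstddef>
#include <cstdint>

// Decodes one stop-bit-encoded unsigned integer starting at buf[pos].
// Each byte contributes 7 data bits; a byte with its 8th (high) bit set
// marks the last byte of the field. Advances pos past the field.
std::uint64_t decode_stop_bit(const unsigned char* buf, std::size_t& pos) {
    std::uint64_t value = 0;
    for (;;) {
        unsigned char b = buf[pos++];
        value = (value << 7) | (b & 0x7Fu);  // accumulate the low 7 bits
        if (b & 0x80u)                       // stop bit set: field ends here
            return value;
    }
}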
Take a look at this article about the FAST support in CoralFIX. It has a Java code example of a FAST decoder generated from an exchange XML template file.
Disclaimer: I am one of the developers of CoralFIX.
A char may be one byte in size, but when it comes to a four-byte value such as an int, how does the CPU treat it as a single integer rather than as four separate one-byte characters?
The CPU executes the code that you wrote.
This code tells the CPU how to treat the few bytes at a certain memory address, like "take the four bytes at address 0x87367, treat them as an integer and add one to the value".
See, it's you who decides how to treat the memory.
Are you asking a question about CPU design?
Each CPU machine instruction is encoded so that the CPU knows how many bits to operate on.
The C++ compiler knows to emit 8-bit instructions for char and 32-bit instructions for int.
In general the CPU by itself knows nothing about the interpretation of values stored at certain memory locations; it's the code that is run (generated, in this case, by the compiler) that is supposed to know that and use the correct CPU instructions to manipulate those values.
To say it another way, the type of a variable is an abstraction of the language that tells the compiler what code to generate to manipulate the memory.
Types do, in some way, exist at the machine-code level: there are different instructions for working on different types, i.e. different ways of interpreting the raw values stored in memory, but it's up to the executed code to use the correct instructions to treat the values stored in memory correctly.
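A tiny C++ sketch of that point: the same four bytes in memory, handled either as four characters or as one 32-bit integer, purely depending on the code we ask the CPU to run:
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    unsigned char bytes[4] = {0x41, 0x42, 0x43, 0x44};  // 'A' 'B' 'C' 'D'

    // Treat the four bytes as four characters...
    for (unsigned char c : bytes)
        std::cout << c;
    std::cout << '\n';                                  // prints ABCD

    // ...or copy the very same bytes into a 32-bit integer. Nothing in
    // memory changes; only the instructions operating on it differ.
    std::uint32_t n;
    std::memcpy(&n, bytes, sizeof n);
    std::cout << n << '\n';  // value depends on the machine's byte order
}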
The compiler has a table called the symbol table, so it knows the type of every variable and how it should be treated.
This depends on the architecture. Most systems use IEEE 754 Floating Point Representation and Two's Complement for integer values, but it's up to the CPU in question. It knows how to turn those bytes into "values" appropriately.
On the CPU side, this mostly relates to two things: the registers and the instruction set (look at x86 for example).
The register is just a small chunk of memory that is closest to the CPU. Values are put there and used there for doing basic operations.
The instruction set will include a set of fixed names (like EAX, AX, etc.) for addressing the registers. Depending on the name, they can refer to shorter or longer slots (e.g. 8 bits, 16, 32, 64, etc.). Corresponding to those registers, there are operations (like addition, multiplication, etc.) which act on register values of a certain size too. How the CPU actually executes the instructions or even stores the registers is not relevant (it's at the discretion of the manufacturer of the CPU), and it's up to the programmer (or compiler) to use the instruction set correctly.
The CPU itself has no idea what it's doing (it's not "intelligent"); it just does the operations as they are requested. The compiler is the one which keeps track of the types of the variables and makes sure that the instructions that are generated and later executed by the program correspond to what you have coded (that's called "compilation"). But once the program is compiled, the CPU doesn't "keep track" of the types or sizes or anything like that (it would be too expensive to do so). Since compilers are pretty much guaranteed to produce instructions that are consistent, this is not an issue. Of course, if you programmed your own code in assembly and used mismatching registers and instructions, the CPU still wouldn't care; it would just make your program behave very weirdly (and likely crash).
Internally, a CPU may be wired to fetch 32 bits for an integer, which translates into 4 8-bit octets (bytes). The CPU does not regard the fetch as 4 bytes, but rather 32 bits.
The CPU is also internally wired to fetch 8 bits for a character (byte). In many processor architectures, the CPU fetches 32 bits from memory and internally ignores the unused bits (keeping the lowest 8 bits). This simplifies the processor architecture by only requiring fetches of 32 bits.
On efficient platforms, the memory is also accessible in 32-bit quantities. The data flow from the memory to the processor is often called the databus. In this description it would be 32 bits wide.
Other processor architectures can fetch 8 bits for a character. This removes the need for the processor to ignore 3 bytes from a 32-bit fetch.
Some programmers view integers in widths of bytes rather than bits. Thus a 32-bit integer would be thought of as 4 bytes. This can create confusion, especially with byte ordering, a.k.a. endianness. Some processors have the first byte containing the most significant bits (Big Endian), while others have the first byte representing the least significant bits (Little Endian). This leads to problems when transferring binary data between platforms.
Knowing that a processor's integer can hold 4 bytes and that it fetches 4 bytes at a time, many programmers like to pack 4 characters into an integer to improve performance. Thus the processor would require 1 fetch for 4 characters rather than 4 fetches for 4 characters. This performance improvement may be wasted by the execution time required to pack and unpack the characters from the integer.
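As a sketch of that packing idea (one possible layout of my own choosing; shifts and masks keep the logical order independent of the machine's byte order, which only matters if you reinterpret the raw bytes):
#include <cstdint>

// Pack four characters into one 32-bit integer, most significant byte first.
std::uint32_t pack(char a, char b, char c, char d) {
    return (static_cast<std::uint32_t>(static_cast<unsigned char>(a)) << 24) |
           (static_cast<std::uint32_t>(static_cast<unsigned char>(b)) << 16) |
           (static_cast<std::uint32_t>(static_cast<unsigned char>(c)) << 8) |
            static_cast<std::uint32_t>(static_cast<unsigned char>(d));
}

// Unpack the character at index 0..3, using the same order as pack().
char unpack(std::uint32_t packed, int index) {
    return static_cast<char>((packed >> (24 - 8 * index)) & 0xFFu);
}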
In summary, I highly suggest you forget about how many bytes make up an integer and any relationship to a quantity of characters or bytes. This concept is only relevant on a few embedded platforms or in a few high-performance applications. Your objective is to deliver correct and robust code within a given duration. Performance and size concerns come at the end of a project, only tended to if somebody complains. You will do fine in your career if you concentrate on the range and limitations of an integer rather than how much memory it occupies.
In an application I've been working on, I have to send a 256 x 256 matrix over a socket. I'm developing a visualization client for an offshore system simulator that runs on a cluster, and this matrix is a heightmap representing the current state of the oceanic surface.
This is a realtime application, so speed is a must. And, using a 256 x 256 matrix of floats, I have to send 256 kbytes of data every second, for a bandwidth requirement of 256 kbytes/second.
That's a lot, at least for my application.
So, my question is: is there some good method for compressing this matrix before sending it via the socket? And, if there is such a method, how much of a reduction can I expect?
As my matrix represents a continuous surface, lossy compression methods are not a problem for me. I'm mostly concerned with the compression ratio, the time it takes for the compression to take place and, finally, whether there is already an implementation of it for C++.
If you are far enough offshore and/or in calm sea states, breaking waves are not likely to be a big problem. If this is the case, then the surface will be nicely continuous, and will likely look a lot like the superposition of multiple sine/cosine waves in X and Y.
2-D FFTs of the surface might give you some insight. You might be able to represent the surface as a bandwidth-limited 2-D FFT, and discard data for higher spatial frequencies.
First off: Numeric representation
Since I assume the physical range of the ocean height is limited (say, -50.0 to 50.0 metre waves), if I understand your description correctly: the typical IEEE 754-2008 32-bit floating point (i.e. float in C/C++) uses 8 bits for its exponent (range of -126 to 127), 23 bits for the fraction and one bit for the sign. Note, that's base 2.
If your minimal measured (or computed) variance is 1 mm (0.001 metres), then you can reduce the floating-point size needed to 16 bits. IEEE 754 does define a 16-bit floating-point value for use as an interchange format: 5 bits for the exponent, 10 bits for the fraction, and 1 bit for the sign. I believe that would be suitable, and it immediately reduces your requirement to 128 KB/s (1024 Kbps).
After I originally wrote this I realized that if you wanted a uniform representation, with a very small amount of error in the representation (<= 2 mm), you could convert to a 16-bit signed integer where one unit represents 2 mm of physical height. That gives you a uniform representation with a resolution of 2 mm, with values ranging from -32768 (== -65536 mm, approximately -65.5 metres) to 32767 (== 65534 mm, approximately 65.5 metres).
That's a very simple alternative representation based on the simple assumptions that (a) the valid range of values is within ±65.5 metres, and (b) 2 mm resolution is acceptable for transmission.
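A minimal sketch of that conversion, assuming the ±65.5 m range and 2 mm resolution described above (C++17 for std::clamp):
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantise a height in metres to a signed 16-bit value, 1 unit == 2 mm,
// clamping to the representable range of roughly +/-65.5 m.
std::int16_t to_fixed(float metres) {
    float units = std::round(metres * 500.0f);       // 1 / 0.002 m
    units = std::clamp(units, -32768.0f, 32767.0f);
    return static_cast<std::int16_t>(units);
}

float from_fixed(std::int16_t units) {
    return static_cast<float>(units) * 0.002f;       // back to metres
}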
Second: Modifying (filtering) the data
I don't know whether a Discrete Cosine Transform (DCT), similar to what is used in JPEG compression, might be useful as a lossy compression technique. Basically this quantizes the data so that nearly equal neighbouring values are smoothed, and they can then be better compressed by lossless compression methods.
Third: Traditional Lossless Compression
Otherwise, consider reasonably fast lossless compression techniques such as Lempel-Ziv based methods (LZ, LZH, LZW, etc.), and perhaps the fast LZO method.
Well, a matrix is just a 2D signal, so there are a lot of different compression methods.
I would first try the easy solution: go for inflate/deflate without a container (basically a Zip, without a Zip). http://en.wikipedia.org/wiki/DEFLATE
The compression level will depend on the data, so I cannot say; you must try it yourself.
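For instance, assuming zlib is available on your platform, a deflate pass over the raw heightmap could look roughly like this (a sketch, not tuned; the ratio you actually get has to be measured on your data):
#include <cstddef>
#include <vector>
#include <zlib.h>   // assumes zlib is available

// Deflate-compress the raw heightmap floats; returns an empty vector on failure.
std::vector<unsigned char> deflate_heightmap(const float* height, std::size_t count) {
    const unsigned char* src = reinterpret_cast<const unsigned char*>(height);
    uLong srcLen  = static_cast<uLong>(count * sizeof(float));
    uLongf dstLen = compressBound(srcLen);        // worst-case compressed size
    std::vector<unsigned char> dst(dstLen);
    // Z_BEST_SPEED favours latency over ratio, which suits a realtime feed.
    if (compress2(dst.data(), &dstLen, src, srcLen, Z_BEST_SPEED) != Z_OK)
        dst.clear();                              // let the caller detect failure
    else
        dst.resize(dstLen);                       // shrink to the bytes actually used
    return dst;
}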
Otherwise, the smarter way to do it would be to send only the changes. If you have access to the server-side code, you can send just the few bytes of the heightmap that changed each second. That would be the ideal solution, and if you wish, you can even compress the changed bytes with a deflater.
First, I'd figure out whether you can change the basic encoding from 32-bit floating point to some sort of fixed point. Assuming your values all fall within a fairly specific range (which seems likely) this may well be enough to cut your bandwidth in half. Depending on the range (and precision) needed, you might well want to represent an exponential value, so you're capturing a fairly decent idea of a wide range of magnitudes, but small differences are mostly ignored.
I'd guess you don't (probably) expect huge height changes from one sample to the next, and (at a guess) you probably fairly frequently see slopes that continue across a number of samples.
If that's the case, a predictive delta compression will probably work well. The basic idea is that for every (non-edge) point, you predict the value of that point based on surrounding points, and then encode only the difference between what that predicts and the actual value for the point. Depending on how much precision you can lose, you might well be able to encode that delta into a single byte (or maybe even pack two per byte).
Once you've done that, you can consider using Huffman compression or even arithmetic compression, but either one will slow your compression a fair amount.
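A rough sketch of such a predictive delta pass over a grid of 16-bit samples, predicting each sample only from its left neighbour (a real implementation might predict from several neighbours and quantise the delta down to a byte, as described above); decoding is simply the inverse accumulation:
#include <cstdint>
#include <vector>

// The deltas cluster near zero for smooth surfaces, which makes the
// output compress far better with a generic compressor afterwards.
std::vector<std::int16_t> delta_encode(const std::vector<std::int16_t>& grid,
                                       int width, int height) {
    std::vector<std::int16_t> out(grid.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            std::int16_t predicted = 0;
            if (x > 0)      predicted = grid[i - 1];      // left neighbour
            else if (y > 0) predicted = grid[i - width];  // sample above
            out[i] = static_cast<std::int16_t>(grid[i] - predicted);
        }
    }
    return out;
}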
First, look at your data. How many bits of information in those floats do you really need to send? Play with chopping off the least significant bits and see if the result is accurate enough. Next, start with the basic lossless algorithms: compress it via the LZ family of lossless methods (LZ78, LZW, ...) to get a baseline lossless ratio with fast decompression speed. Then try BZip and the like for a possibly better compression ratio and a slower decompress. You now have your lossless limit. Now try some lossy algorithms: JPEG and the like have tunable lossy ratios and still decompress really fast. Finally, add some filters: your data would probably compress very well with a simple differential pass along the X or Y axis (or try both and save the result as 1 bit). This should make your data even more compressible.
All told, I'd guess you could get at least 3x your current bandwidth losslessly, and 10x with a little loss.
If I understand you right, the surface is measured every second. So if the changes within one second are not too high, why not treat the data as video and try a video compression algorithm? Video compression also takes motion compensation into account, which is, among the other parts of the algorithm, important for the high compression rates of video.
I would try the following:
Because the height change is expected to be quite small each second, try sending the differences in height between two consecutive transfers. Multiply these numbers by 10^n so you can send them as integers instead of floats. Next, use zero-compressed encoding (please look it up), which can reduce the number of bytes to be sent significantly. After that, use some compression algorithm to pack these bytes.
I would think it can be reduced by about 50% (unless the differences are big enough to need 3-4 bytes each).
Well, first off, how many levels of height do you need? What's the maximum difference in height from wave peak to trough? I bet you could represent it with only 256 or 65536 possible height values, which immediately cuts your data to 1/4 or 1/2 without you having to modify your data structure.
You can send the min/max values as floats as well with each update, so the 256 levels are always used fully to get the most accuracy possible... as the sea gets rougher you lose accuracy.
You can also save it as a 256x256 image using standard image algorithms. You've not quite got a standard-format bitmap, but you could treat it as grayscale: if each vertex V is scaled to a value 0-255, you can build a colour (V,V,V) and use an existing JPG library for free. Or you can probably find a DDS file format that has a single channel of 8/16/32-bit data too.
The first part of this I did in the past, successfully. For the second part, I'd be keen to avoid writing your own algorithm; instead, get your data into a form where it can use existing libraries, like D3DX for example.
I was thinking about compression, and it seems like there would have to be some sort of limit to the compression that could be applied to it, otherwise it'd end up as a single byte.
So my question is, how many times can I compress a file before:
It does not get any smaller?
The file becomes corrupt?
Are these two points the same or different?
Where does the point of diminishing returns appear?
How can these points be found?
I'm not talking about any specific algorithm or particular file, just in general.
For lossless compression, the only way you can know how many times you can gain by recompressing a file is by trying. It's going to depend on the compression algorithm and the file you're compressing.
Two files can never compress to the same output, so you can't go down to one byte. How could one byte represent all the files you could decompress to?
The reason that the second compression sometimes works is that a compression algorithm can't do omniscient perfect compression. There's a trade-off between the work it has to do and the time it takes to do it. Your file is being changed from all data to a combination of data about your data and the data itself.
Example
Take run-length encoding (probably the simplest useful compression) as an example.
04 04 04 04 43 43 43 43 51 52 10 bytes
That series of bytes could be compressed as:
[4] 04 [4] 43 [-2] 51 52 7 bytes (I'm putting meta data in brackets)
Where the positive number in brackets is a repeat count and the negative number in brackets is a command to emit the next -n characters as they are found.
In this case we could try one more compression:
[3] 04 [-4] 43 fe 51 52 7 bytes (fe is your -2 seen as two's complement data)
We gained nothing, and we'll start growing on the next iteration:
[-7] 03 04 fc 43 fe 51 52 8 bytes
We'll grow by one byte per iteration for a while, but it will actually get worse. One byte can only hold negative numbers to -128. We'll start growing by two bytes when the file surpasses 128 bytes in length. The growth will get still worse as the file gets bigger.
There's a headwind blowing against the compression program--the meta data. And also, for real compressors, the header tacked on to the beginning of the file. That means that eventually the file will start growing with each additional compression.
RLE is a starting point. If you want to learn more, look at LZ77 (which looks back into the file to find patterns) and LZ78 (which builds a dictionary). Compressors like zip often try multiple algorithms and use the best one.
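For concreteness, here is a sketch in C++ of the RLE convention used in the example above: a positive count byte means "repeat the next byte that many times", and a negative count byte -n means "copy the next n bytes through literally". Decoding simply reverses this: read a count byte, then either repeat or copy.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<std::int8_t> rle_encode(const std::vector<std::uint8_t>& in) {
    std::vector<std::int8_t> out;
    std::size_t i = 0;
    while (i < in.size()) {
        // Measure the run starting at i (capped at 127 to fit the count byte).
        std::size_t run = 1;
        while (i + run < in.size() && in[i + run] == in[i] && run < 127)
            ++run;
        if (run >= 2) {
            out.push_back(static_cast<std::int8_t>(run));    // repeat count
            out.push_back(static_cast<std::int8_t>(in[i]));  // repeated byte
            i += run;
        } else {
            // Gather literal bytes until the next run of two or more (max 127).
            std::size_t start = i;
            while (i < in.size() &&
                   !(i + 1 < in.size() && in[i + 1] == in[i]) &&
                   i - start < 127)
                ++i;
            out.push_back(static_cast<std::int8_t>(-static_cast<int>(i - start)));
            for (std::size_t j = start; j < i; ++j)
                out.push_back(static_cast<std::int8_t>(in[j]));
        }
    }
    return out;
}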
Here are some cases I can think of where multiple compression has worked.
I worked at an Amiga magazine that shipped with a disk. Naturally, we packed the disk to the gills. One of the tools we used let you pack an executable so that when it was run, it decompressed and ran itself. Because the decompression algorithm had to be in every executable, it had to be small and simple. We often got extra gains by compressing twice. The decompression was done in RAM. Since reading a floppy was slow, we often got a speed increase as well!
Microsoft supported RLE compression on bmp files. Also, many word processors did RLE encoding. RLE files are almost always significantly compressible by a better compressor.
A lot of the games I worked on used a small, fast LZ77 decompressor. If you compress a large rectangle of pixels (especially if it has a lot of background color, or if it's an animation), you can very often compress twice with good results. (The reason? You only have so many bits to specify the lookback distance and the length, so a single large repeated pattern is encoded in several pieces, and those pieces are highly compressible.)
Generally the limit is one compression. Some algorithms result in a higher compression ratio, and using a poor algorithm followed by a good algorithm will often result in improvements. But using the good algorithm in the first place is the proper thing to do.
There is a theoretical limit to how much a given set of data can be compressed. To learn more about this you will have to study information theory.
In general for most algorithms, compressing more than once isn't useful. There's a special case though.
If you have a large number of duplicate files, the zip format will zip each independently, and you can then zip the first zip file to remove duplicate zip information. Specifically, for 7 identical Excel files sized at 108kb, zipping them with 7-zip results in a 120kb archive. Zipping again results in an 18kb archive. Going past that you get diminishing returns.
Suppose we have a file N bits long, and we want to compress it losslessly, so that we can recover the original file. There are 2^N possible files N bits long, and so our compression algorithm has to change one of these files to one of 2^N possible others. However, we can't express 2^N different files in less than N bits.
Therefore, if we can take some files and compress them, we have to have some files that lengthen under compression, to balance out the ones that shorten.
This means that a compression algorithm can only compress certain files, and it actually has to lengthen some. This means that, on the average, compressing a random file can't shorten it, but might lengthen it.
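Spelled out as a counting argument (a worked restatement of the paragraph above):
\[
\#\{\text{files of exactly } N \text{ bits}\} = 2^N,
\qquad
\#\{\text{strings shorter than } N \text{ bits}\} = \sum_{k=0}^{N-1} 2^k = 2^N - 1.
\]
An invertible (lossless) scheme cannot map 2^N distinct inputs onto at most 2^N - 1 distinct shorter outputs, so at least one input must map to something no shorter than itself.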
Practical compression algorithms work because we don't usually use random files. Most of the files we use have some sort of structure or other properties, whether they're text or program executables or meaningful images. By using a good compression algorithm, we can dramatically shorten files of the types we normally use.
However, the compressed file is not one of those types. If the compression algorithm is good, most of the structure and redundancy have been squeezed out, and what's left looks pretty much like randomness.
No compression algorithm, as we've seen, can effectively compress a random file, and that applies to a random-looking file also. Therefore, trying to re-compress a compressed file won't shorten it significantly, and might well lengthen it some.
So, the normal number of times a compression algorithm can be profitably run is one.
Corruption only happens when we're talking about lossy compression. For example, you can't necessarily recover an image precisely from a JPEG file. This means that a JPEG compressor can reliably shorten an image file, but only at the cost of not being able to recover it exactly. We're often willing to do this for images, but not for text, and particularly not executable files.
In this case, there is no stage at which corruption begins. It starts when you begin to compress it, and gets worse as you compress it more. That's why good image-processing programs let you specify how much compression you want when you make a JPEG: so you can balance quality of image against file size. You find the stopping point by considering the cost of file size (which is more important for net connections than storage, in general) versus the cost of reduced quality. There's no obvious right answer.
Usually compressing once is good enough if the algorithm is good.
In fact, compressing multiple times could lead to an increase in the size.
Your two points are different.
Compression done repeatedly and achieving no improvement in size reduction is an expected theoretical condition.
Repeated compression causing corruption is likely to be an error in the implementation (or maybe in the algorithm itself).
Now let's look at some exceptions or variations:
Encryption may be applied repeatedly without a reduction in size (in fact, at times an increase in size) for the purpose of increased security.
Image, video or audio files that are increasingly compressed will lose data (effectively be 'corrupted', in a sense).
You can compress a file as many times as you like. But for most compression algorithms the resulting compression from the second time on will be negligible.
Compression (I'm thinking lossless) basically means expressing something more concisely. For example
111111111111111
could be more concisely expressed as
15 X '1'
This is called run-length encoding. Another method that a computer can use is to find a pattern that is regularly repeated in a file.
There is clearly a limit to how much these techniques can be used; for example, run-length encoding is not going to be effective on
15 X '1'
since there are no repeating patterns. Similarly, if the pattern-replacement method converts long patterns to 3-char ones, reapplying it will have little effect, because the only remaining repeating patterns will be 3 characters long or shorter. Generally, applying compression to an already compressed file makes it slightly bigger, because of various overheads. Applying good compression to a poorly compressed file is usually less effective than applying just the good compression.
How many times can I compress a file before it does not get any smaller?
In general, not even once. Whatever compression algorithm you use, there must always exist a file that does not get compressed at all; otherwise, by your same argument, you could always compress repeatedly until you reach 1 byte.
How many times can I compress a file before it becomes corrupt?
If the program you use to compress the file does its job, the file will never become corrupt (of course, I am thinking of lossless compression).
You can compress infinitely many times. However, the second and further compressions will usually only produce a file larger than the previous one, so there is no point in compressing more than once.
It is a very good question. You can view the file from different points of view; maybe you know a priori that the file contains an arithmetic series.
Let's view it as a data stream of "bytes", "symbols", or "samples".
Some answers can be found in information theory and mathematical statistics.
Please check the monographs of these researchers for a fuller understanding:
A. Kolmogorov
S. Kullback
C. Shannon
N. Wiener
One of the main concepts in information theory is entropy.
If you have a stream of "bytes", the entropy of those bytes doesn't depend on the values of your "bytes" or "samples"...
It is defined only by the frequencies with which the bytes take different values.
Maximum entropy occurs for a completely random data stream.
Minimum entropy, which equals zero, occurs when all your "bytes" have an identical value.
It does not get any smaller?
So the entropy is the minimum number of bits per "byte" that you need to use when writing that information to disk. Of course, that is only the case if you use an ideal algorithm; real-life lossless compression heuristics fall short of it.
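As a rough illustration in C++, a sketch that estimates that entropy from the observed byte frequencies of a buffer (it assumes len > 0):
#include <array>
#include <cmath>
#include <cstddef>

// Estimate the Shannon entropy of a byte stream, in bits per byte:
// H = -sum(p_i * log2(p_i)), where p_i is the observed frequency of byte value i.
double entropy_bits_per_byte(const unsigned char* data, std::size_t len) {
    std::array<std::size_t, 256> counts{};
    for (std::size_t i = 0; i < len; ++i)
        ++counts[data[i]];
    double h = 0.0;
    for (std::size_t c : counts) {
        if (c == 0)
            continue;
        double p = static_cast<double>(c) / static_cast<double>(len);
        h -= p * std::log2(p);
    }
    return h;   // 0 for identical bytes, up to 8 for uniformly random bytes
}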
The file becomes corrupt?
I don't quite understand the sense of the question. You could write no bits to the disk, and you would have written a corrupted file of size 0 bits. Of course it is corrupted, but its size is zero bits.
Here is the ultimate compression algorithm (in Python) which by repeated use will compress any string of digits down to size 0 (it's left as an exercise to the reader how to apply this to a string of bytes).
def compress(digitString):
    if digitString == "":
        raise ValueError("already as small as possible")
    currentLen = len(digitString)
    if digitString == "0" * currentLen:
        return "9" * (currentLen - 1)
    n = str(int(digitString) - 1)  # convert to a number and decrement
    newLen = len(n)
    return ("0" * (currentLen - newLen)) + n  # pad with zeros to keep the same length

# test it
x = "12"
while x != "":
    print(x)
    x = compress(x)
The program outputs 12 11 10 09 08 07 06 05 04 03 02 01 00 9 8 7 6 5 4 3 2 1 0 then empty string. It doesn't compress the string at each pass but it will with enough passes compress any digit string down to a zero length string. Make sure you write down how many times you send it through the compressor otherwise you won't be able to get it back.
I would like to state that the limit of compression itself hasn't really been pushed to its fullest. Since each pixel or written character is just a black-or-white outline, one could write a program that decompiles back into what it was, say a book, flawlessly, but compresses the pixel patterns and words into a better system of compression. Meaning it would probably take a lot longer to compress, but as a system file grows to gigabytes or terabytes, the repeated letters P and R and q and the black-and-white deviations could be compressed exponentially into a complex automated formula. The machine doesn't need the data to make sense; it can just play a game of producing a highly compressed pattern. This in turn would allow us humans to create a customized compression-reading engine. Meaning now we have real compression power: design an entire engine that can restore the information on the user side. The engine has its own language that is optimal, no spaces, just filling black-and-white pixel boxes of the smallest set, or even writing its own patterned language. And thus it can, for the most optimal performance, give out a unique cipher or decompression formula when it is done, and thus the file is optimally compressed and has a password that is unique for the engine to decompress it later. The machine can do an almost limitless set of iterations to compress the file further. It's like having an open book and putting all the written stories of humanity currently onto one A4 sheet. I don't know, but it is another theory. So what happens is split volume, because the formula to decompress would have its own size, and even the naming of the folder and/or icon information has a size, so one could go further and put every form of data into a string of information. Hmm...
It all depends on the algorithm. In other words the question can be how many times a file can be compressed using this algorithm first, then this one next...
Example of a more advanced compression technique using "a double table, or cross matrix".
It also eliminates extraneous, unnecessary symbols in the algorithm:
[PREVIOUS EXAMPLE]
Take run-length encoding (probably the simplest useful compression) as an example.
04 04 04 04 43 43 43 43 51 52 10 bytes
That series of bytes could be compressed as:
[4] 04 [4] 43 [-2] 51 52 7 bytes (I'm putting meta data in brackets)
[TURNS INTO]
04.43.51.52 VALUES
4.4.**-2 COMPRESSION
Further Compression Using Additional Symbols as substitute values
04.A.B.C VALUES
4.4.**-2 COMPRESSION
In theory, we will never know; it is a never-ending thing:
In computer science and mathematics, the term full employment theorem has been used to refer to a theorem showing that no algorithm can optimally perform a particular task done by some class of professionals. The name arises because such a theorem ensures that there is endless scope to keep discovering new techniques to improve the way at least some specific task is done. For example, the full employment theorem for compiler writers states that there is no such thing as a provably perfect size-optimizing compiler, as such a proof for the compiler would have to detect non-terminating computations and reduce them to a one-instruction infinite loop. Thus, the existence of a provably perfect size-optimizing compiler would imply a solution to the halting problem, which cannot exist, making the proof itself an undecidable problem.
(source)