I'm looking for a way to apply a specific compression strategy using zlib-1.2.5.
I need to force zlib to pack a file without actually compressing it. The only part I want compressed is the zero padding at the end of the file, so that the output file does not contain it. The size of the input files ranges from 1 MB to 1 GB; the padding is 512 B.
Is this achievable with zlib?
Edit:
The code I'm working on is based on Unreal Engine 4:
https://github.com/EpicGames/UnrealEngine/blob/release/Engine/Source/Runtime/Core/Private/Misc/Compression.cpp
static bool appCompressMemoryZLIB(void* CompressedBuffer, int32& CompressedSize, const void* UncompressedBuffer, int32 UncompressedSize, int32 BitWindow, int32 CompLevel)
{
DECLARE_SCOPE_CYCLE_COUNTER(TEXT("Compress Memory ZLIB"), STAT_appCompressMemoryZLIB, STATGROUP_Compression);
ensureMsgf(CompLevel >= Z_DEFAULT_COMPRESSION, TEXT("CompLevel must be >= Z_DEFAULT_COMPRESSION"));
ensureMsgf(CompLevel <= Z_BEST_COMPRESSION, TEXT("CompLevel must be <= Z_BEST_COMPRESSION"));
CompLevel = FMath::Clamp(CompLevel, Z_DEFAULT_COMPRESSION, Z_BEST_COMPRESSION);
// Zlib wants to use unsigned long.
unsigned long ZCompressedSize = CompressedSize;
unsigned long ZUncompressedSize = UncompressedSize;
bool bOperationSucceeded = false;
// Compress data
// If using the default Zlib bit window, use the zlib routines, otherwise go manual with deflateInit2
if (BitWindow == 0 || BitWindow == DEFAULT_ZLIB_BIT_WINDOW)
{
bOperationSucceeded = compress2((uint8*)CompressedBuffer, &ZCompressedSize, (const uint8*)UncompressedBuffer, ZUncompressedSize, CompLevel) == Z_OK ? true : false;
}
else
{
z_stream stream;
stream.next_in = (Bytef*)UncompressedBuffer;
stream.avail_in = (uInt)ZUncompressedSize;
stream.next_out = (Bytef*)CompressedBuffer;
stream.avail_out = (uInt)ZCompressedSize;
stream.zalloc = &zalloc;
stream.zfree = &zfree;
stream.opaque = Z_NULL;
if (ensure(Z_OK == deflateInit2(&stream, CompLevel, Z_DEFLATED, BitWindow, MAX_MEM_LEVEL, Z_DEFAULT_STRATEGY)))
{
if (ensure(Z_STREAM_END == deflate(&stream, Z_FINISH)))
{
ZCompressedSize = stream.total_out;
if (ensure(Z_OK == deflateEnd(&stream)))
{
bOperationSucceeded = true;
}
}
else
{
deflateEnd(&stream);
}
}
}
// Propagate compressed size from intermediate variable back into out variable.
CompressedSize = ZCompressedSize;
return bOperationSucceeded;
}
With params: input buffer size = 65kB, CompLevel=Z_DEFAULT_COMPRESSION, MAX_MEM_LEVEL=9, BitWindow=15
No, there is not.
If all you want to do is strip the zeros at the end, then that is trivial. It's a few lines of code. You don't need zlib's 18,000 lines of code to do that.
If you also want to restore those zeros to the file at the other end ("decompressing it"), that is trivial as well. Just send a count of the zeros that were removed, with that count in a few bytes appended to the end. On the other end, replace that count with that many zero bytes. Also a few lines of code.
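For illustration, here's a minimal sketch of both directions (the in-place buffer handling and the four-byte little-endian count are my assumptions; any fixed-size encoding of the count works):

#include <stddef.h>
#include <stdint.h>

/* Sketch: drop trailing zeros from buf[0..len) and append a 4-byte
   little-endian count of how many were removed. Returns the new length.
   The caller must guarantee 4 bytes of spare room in case nothing is
   stripped (with 512 B of padding there is always room). */
size_t strip_zeros(unsigned char *buf, size_t len)
{
    size_t n = len;
    while (n > 0 && buf[n - 1] == 0)
        n--;                               /* back up over the zero padding */
    uint32_t removed = (uint32_t)(len - n);
    for (int i = 0; i < 4; i++)            /* append the count */
        buf[n + i] = (unsigned char)(removed >> (8 * i));
    return n + 4;
}

/* Sketch: read the count back, then restore that many zero bytes.
   The caller must size buf for the restored file. */
size_t restore_zeros(unsigned char *buf, size_t len)
{
    uint32_t removed = 0;
    for (int i = 0; i < 4; i++)
        removed |= (uint32_t)buf[len - 4 + i] << (8 * i);
    len -= 4;                              /* drop the count field */
    for (uint32_t i = 0; i < removed; i++)
        buf[len + i] = 0;                  /* put the padding back */
    return len + removed;
}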
I have a TIFF file and would like to get a list of all tags used in that file. If I understand the TIFFGetField() function correctly, it only gets the values of the tags you specify. But how do I know what tags the file uses? I would like to get all the tags used in the file. Is there an easy way to get them with libtiff?
It seems to be a very manual process from my experience. I used the TIFF tag reference here https://www.awaresystems.be/imaging/tiff/tifftags.html to create a custom structure
typedef struct
{
TIFF_TAGS_BASELINE Baseline;
TIFF_TAGS_EXTENSION Extension;
TIFF_TAGS_PRIVATE Private;
} TIFF_TAGS;
With each substructure custom defined. For example,
typedef struct
{
TIFF_UINT32_T NewSubfileType; // TIFFTAG_SUBFILETYPE
TIFF_UINT16_T SubfileType; // TIFFTAG_OSUBFILETYPE
TIFF_UINT32_T ImageWidth; // TIFFTAG_IMAGEWIDTH
TIFF_UINT32_T ImageLength; // TIFFTAG_IMAGELENGTH
TIFF_UINT16_T BitsPerSample; // TIFFTAG_BITSPERSAMPLE
...
char *Copyright; // TIFFTAG_COPYRIGHT
} TIFF_TAGS_BASELINE;
Then I have custom readers:
TIFF_TAGS *read_tiff_tags(char *filename)
{
TIFF_TAGS *tags = NULL;
TIFF *tif = TIFFOpen(filename, "r");
if (tif)
{
tags = calloc(1, sizeof(TIFF_TAGS));
read_tiff_tags_baseline(tif, tags);
read_tiff_tags_extension(tif, tags);
read_tiff_tags_private(tif, tags);
TIFFClose(tif);
}
return tags;
}
Where you have to manually read each field. Depending on whether the field is an array, you'll have to check the return status. For simple fields, it's something like
// The number of columns in the image, i.e., the number of pixels per row.
TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &tags->Baseline.ImageWidth);
but for array fields you'll need something like this
// The scanner model name or number.
status = TIFFGetField(tif, TIFFTAG_MODEL, &infobuf);
if (status)
{
len = strlen(infobuf);
tags->Baseline.Model = malloc(sizeof(char) * (len + 1));
_mysprintf(tags->Baseline.Model, (int)(len + 1), "%s", infobuf);
tags->Baseline.Model[len] = 0;
}
else
{
tags->Baseline.Model = NULL;
}
// For each strip, the byte offset of that strip.
status = TIFFGetField(tif, TIFFTAG_STRIPOFFSETS, &arraybuf);
if (status)
{
tags->Baseline.NumberOfStrips = TIFFNumberOfStrips(tif);
tags->Baseline.StripOffsets = calloc(tags->Baseline.NumberOfStrips, sizeof(TIFF_UINT32_T));
for (strip = 0; strip < tags->Baseline.NumberOfStrips; strip++)
{
tags->Baseline.StripOffsets[strip] = arraybuf[strip];
}
}
else
{
tags->Baseline.StripOffsets = NULL;
}
My suggestion is to only read the fields you want/need and ignore everything else. Hope that helps.
I'm trying to use System.IO.Pipelines to parse large text files.
But I can't find a conversion function from ReadOnlySequence<byte> to ReadOnlySequence<char>, something like MemoryMarshal.Cast<byte,char>.
IMHO it is pretty useless having a generic ReadOnlySequence<T> if there is only one particular type (byte) applicable.
static async Task ReadPipeAsync(PipeReader reader, IStringValueFactory factory)
{
while (true)
{
ReadResult result = await reader.ReadAsync();
ReadOnlySequence<byte> buffer = result.Buffer;
//ReadOnlySequence<char> chars = buffer.CastTo<char>(); ???
}
}
You would have to write a conversion operator to achieve this cast. You cannot cast it directly. Be aware that a .NET char is two bytes, so you need to choose your encoding appropriately.
IMHO it is pretty useless having a generic ReadOnlySequence<T> if
there is only one particular type (byte) applicable.
While it's true that System.IO.Pipelines will only give you a ReadOnlySequence<byte>, because a PipeReader is attached to a Stream, which is just a stream of bytes, there are other use cases for a ReadOnlySequence<T>, e.g.:
ReadOnlySequence<char> roChars = new ReadOnlySequence<char>("some chars".ToCharArray());
ReadOnlySequence<string> roStrings = new ReadOnlySequence<string>(new string[] { "string1", "string2", "Another String" });
Your conversion operator would have similar logic to the below, but you would set your encoding appropriately.
static void Main(string[] args)
{
// create a 64k Readonly sequence of random bytes
var ros = new ReadOnlySequence<byte>(GenerateRandomBytes(64000));
//Optionally extract the section of the ReadOnlySequence we are interested in
var mySlice = ros.Slice(22222, 33333); // length must fit within the 64000-byte sequence
char[] charArray;
// Check if the slice is a single segment - not really necessary
// included for explanation only
if(mySlice.IsSingleSegment)
{
charArray = Encoding.ASCII.GetString(mySlice.FirstSpan).ToCharArray();
}
else
// Could skip the check above and always assume multiple segments,
// which is highly likely for a PipeReader stream
{
Span<byte> theSpan = new byte[mySlice.Length];
mySlice.CopyTo(theSpan);
// ASCII decoding - each byte of the span becomes one char
charArray = Encoding.ASCII.GetString(theSpan).ToCharArray();
}
// Convert the char array back to a ReadOnlySequence<char>
var rosChar = new ReadOnlySequence<char>(charArray);
}
public static byte[] GenerateRandomBytes(int length)
{
// Create a buffer
byte[] randBytes;
if (length >= 1)
randBytes = new byte[length];
else
randBytes = new byte[1];
// Create a new RNGCryptoServiceProvider.
System.Security.Cryptography.RNGCryptoServiceProvider rand =
new System.Security.Cryptography.RNGCryptoServiceProvider();
// Fill the buffer with random bytes.
rand.GetBytes(randBytes);
// return the bytes.
return randBytes;
}
Here's some sample code which shows how I'm using OpenSSL:
BIO *CreateMemoryBIO() {
if (BIO *bio = BIO_new(BIO_s_mem())) {
BIO_set_mem_eof_return(bio, -1);
return bio;
}
throw std::runtime_error("Could not create memory BIO");
}
m_readBIO = CreateMemoryBIO();
m_writeBIO = CreateMemoryBIO();
SSL_set_bio(m_ssl, m_readBIO, m_writeBIO);
Now, if I do an SSL_read and I get SSL_ERROR_WANT_READ, is there any way for me to find out how much it tried to read internally (in other words, how much do I need to write with BIO_write to m_readBIO before SSL_read would be satisfied)?
A good lower bound would work for me as well; my issue is that I need to report how much data to read to the layer above me, and it will not return control to me until it has read that much data (and I don't want to degenerate into 1-byte reads).
I'm aware that SSL_read and SSL_write may both alternately read and write due to handshaking and such, but I'm interested in the 'current' read that is being done internally.
If it's not possible to do with the standard BIO_s_mem, I assume it could be done if I wrote my own BIO which 'remembered' the size of the last read request which failed, so any pointers to documentation on writing custom BIOs (which, to my knowledge, is supported by OpenSSL) would also be appreciated.
Thanks to CristiFati for suggesting BIO_set_callback; it seems to work. If you want to make your comment into an answer, I'll accept it, but I want to put the details here for posterity.
Inside my 'SSLSocket' class:
In the constructor:
BIO_set_callback(m_readBIO, &BIOCallback);
BIO_set_callback_arg(m_readBIO, reinterpret_cast<char*>(this));
long SSLSocket::BIOCallback(
BIO *in_bio,
int in_operation,
const char* in_arg1,
int in_arg2,
long in_arg3,
long in_returnValue)
{
// in_bio isn't provided for BIO_CB_FREE.
if (BIO_CB_FREE == in_operation)
{
return in_returnValue;
}
assert(in_arg1);
return reinterpret_cast<SSLSocket*>(BIO_get_callback_arg(in_bio))->DoBIOCallback(
in_bio,
in_operation,
in_arg1,
in_arg2,
in_arg3,
in_returnValue);
}

long SSLSocket::DoBIOCallback(
BIO *in_bio,
int in_operation,
const char* in_arg1,
int in_arg2,
long in_arg3,
long in_returnValue)
{
UNUSED(in_arg3);
// We only care about the return callback for BIO_read()
if ((BIO_CB_READ | BIO_CB_RETURN) == in_operation)
{
const int shouldRetry = BIO_should_retry(in_bio);
const int bytesRequested = in_arg2;
assert(bytesRequested > 0);
if ((in_returnValue <= 0) && shouldRetry)
{
m_needBytes = bytesRequested;
}
else if ((in_returnValue > 0) && (in_returnValue < bytesRequested) && shouldRetry)
{
m_needBytes = bytesRequested - in_returnValue;
}
else
{
m_needBytes = 0;
}
}
return in_returnValue;
}
Then I use m_needBytes to decide how much to write in BIO_write().
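To sketch the consuming side (transport_read() is a hypothetical blocking read from the layer above; ssl and readBIO correspond to m_ssl and m_readBIO in the class):

#include <openssl/bio.h>
#include <openssl/ssl.h>

/* Hypothetical: blocking read of up to len bytes from the layer above. */
extern int transport_read(unsigned char *buf, size_t len);

/* Sketch: feed the memory BIO exactly the bytes OpenSSL asked for, then
   retry the TLS read. SSL_read may still report WANT_READ if another
   record or a renegotiation follows; the caller loops as usual. */
int retry_ssl_read(SSL *ssl, BIO *readBIO, size_t needBytes,
                   unsigned char *plain, int plainLen)
{
    unsigned char wire[4096];
    while (needBytes > 0) {
        size_t want = needBytes < sizeof(wire) ? needBytes : sizeof(wire);
        int n = transport_read(wire, want);
        if (n <= 0)
            return -1;                 /* transport error or EOF */
        BIO_write(readBIO, wire, n);   /* hand the bytes to OpenSSL */
        needBytes -= (size_t)n;
    }
    return SSL_read(ssl, plain, plainLen);
}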
Alright, so firstly I want to thank everyone for helping me so much in the last couple weeks, here's another one!!!
I have a file and I'm using Regex to find how many times the term "TamedName" comes up. That's the easy part :)
Originally, I was setting it up like this
StreamReader ff = new StreamReader(fileName);
String D = ff.ReadToEnd();
Regex rx = new Regex("TamedName");
foreach (Match Dino in rx.Matches(D))
{
if (richTextBox2.Text == "")
richTextBox2.Text += string.Format("{0} - {1:X} - {2}", Dino.Value, Dino.Index, ReadString(fileName, (uint)Dino.Index));
else
richTextBox2.Text += string.Format("\n{0} - {1:X} - {2}", Dino.Value, Dino.Index, ReadString(fileName, (uint)Dino.Index));
}
and it was returning completely incorrect index points.
I'm fairly confident I know why it's doing this: converting a binary file to a string means not all the characters translate, which throws off the actual index count, so relating the indexes back to file offsets doesn't work at all. The problem is, I have NO clue how to use Regex with a binary file and have it translate properly :(
I'm using Regex vs a simple search function because the difference between each occurrence of "TamedName" is WAY too vast to code into a function.
Really hope you guys can help me with this one :( I'm running out of ideas!!
The problem is that you are reading in a binary file, and the StreamReader does some interpretation when it reads it into a Unicode string. The data needs to be dealt with as bytes.
My code is below. (Just as an FYI, you will need to enable unsafe compilation to compile the code; this allows a fast search of the binary array.)
Just for proper attribution, I borrowed the byte version of IndexOf from this SO answer by Dylan Nicholson
namespace ArkIndex
{
class Program
{
static void Main(string[] args)
{
string fileName = "TheIsland.ark";
string searchString = "TamedName";
byte[] bytes = LoadBytesFromFile(fileName);
byte[] searchBytes = System.Text.Encoding.ASCII.GetBytes(searchString);
List<long> allNeedles = FindAllBytes(bytes, searchBytes);
}
static byte[] LoadBytesFromFile(string fileName)
{
FileStream fs = new FileStream(fileName, FileMode.Open);
//BinaryReader br = new BinaryReader(fs);
//StreamReader ff = new StreamReader(fileName);
MemoryStream ms = new MemoryStream();
fs.CopyTo(ms);
fs.Close();
return ms.ToArray();
}
public static List<long> FindAllBytes(byte[] haystack, byte[] needle)
{
long currentOffset = 0;
long offsetStep = needle.Length;
long index = 0;
List<long> allNeedleOffsets = new List<long>();
while((index = IndexOf(haystack,needle,currentOffset)) != -1L)
{
allNeedleOffsets.Add(index);
currentOffset = index + offsetStep;
}
return allNeedleOffsets;
}
public static unsafe long IndexOf(byte[] haystack, byte[] needle, long startOffset = 0)
{
fixed (byte* h = haystack) fixed (byte* n = needle)
{
for (byte* hNext = h + startOffset, hEnd = h + haystack.LongLength + 1 - needle.LongLength, nEnd = n + needle.LongLength; hNext < hEnd; hNext++)
for (byte* hInc = hNext, nInc = n; *nInc == *hInc; hInc++)
if (++nInc == nEnd)
return hNext - h;
return -1;
}
}
}
}
I'm working on a function that can uncompress the deflate compression, so I can read/draw PNG files in my C++ program. However, the deflate specification isn't very clear on some things.
So my main question is:
Paragraph 3.2.7, "Compression with dynamic Huffman codes (BTYPE=10)", of the specification states that
the distance code follows the literal/length
But it does not state how many bits the distance code occupies. Is it an entire byte?
And how does the distance code relate? What is its use, really?
Does anyone have a general explanation, since the specification is kinda lacking in clarity?
The specification I found is here:
http://www.ietf.org/rfc/rfc1951.txt
Edit (here is my code, for use with the puff inflate code):
First the header (ConceptApp.h)
#include "resource.h"
#ifdef _WIN64
typedef unsigned long long SIZE_PTR;
#else
typedef unsigned long SIZE_PTR;
#endif
typedef struct _IMAGE {
DWORD Width; //Width in pixels.
DWORD Height; //Height in pixels.
DWORD BitsPerPixel; //24 (RGB), 32 (RGBA).
DWORD Planes; //Count of color planes
PBYTE Pixels; //Pointer to the first pixel of the image.
} IMAGE, *PIMAGE;
typedef DWORD LodePNGColorType;
typedef struct _LodePNGColorMode {
DWORD colortype;
DWORD bitdepth;
} LodePNGColorMode;
typedef struct LodePNGInfo
{
/*header (IHDR), palette (PLTE) and transparency (tRNS) chunks*/
unsigned compression_method;/*compression method of the original file. Always 0.*/
unsigned filter_method; /*filter method of the original file*/
unsigned interlace_method; /*interlace method of the original file*/
LodePNGColorMode color; /*color type and bits, palette and transparency of the PNG file*/
} LodePNGInfo;
typedef struct _ZLIB {
BYTE CMF;
BYTE FLG;
//DWORD DICTID; //if FLG.FDICT (Bit 5) is set, this variable follows.
//Compressed data here...
} ZLIB, *PZLIB;
typedef struct _PNG_IHDR {
DWORD Width;
DWORD Height;
BYTE BitDepth;
BYTE ColourType;
BYTE CompressionMethod;
BYTE FilterMethod;
BYTE InterlaceMethod;
} PNG_IHDR, *PPNG_IHDR;
typedef struct _PNG_CHUNK {
DWORD Length;
CHAR ChuckType[4];
} PNG_CHUNK, *PPNG_CHUNK;
typedef struct _PNG {
BYTE Signature[8];
PNG_CHUNK FirstChunk;
} PNG, *PPNG;
And the code .cpp file:
The main function can be found at the bottom of the file (LoadPng)
BYTE LoadPng(PPNG PngFile, PIMAGE ImageData)
{
PDWORD Pixel = 0;
DWORD ChunkSize = 0;
PPNG_IHDR PngIhdr = (PPNG_IHDR) ((SIZE_PTR) &PngFile->FirstChunk + sizeof(PNG_CHUNK));
DWORD Png_Width = Png_ReadDword((PBYTE)&PngIhdr->Width);
DWORD Png_Height = Png_ReadDword((PBYTE)&PngIhdr->Height);
DWORD BufferSize = (Png_Width*Png_Height) * 8; //This is just a guess right now, haven't done the math yet!
ChunkSize = Png_ReadDword((PBYTE)&PngFile->FirstChunk.Length);
PPNG_CHUNK ThisChunk = (PPNG_CHUNK) ((SIZE_PTR)&PngFile->FirstChunk + ChunkSize + 12); //12 is the length var itself, Chunktype and CRC.
PPNG_CHUNK NextChunk;
PBYTE UncompressedData = (PBYTE) malloc(BufferSize);
INT RetValue = 0;
do
{
ChunkSize = Png_ReadDword((PBYTE)&ThisChunk->Length);
NextChunk = (PPNG_CHUNK) ((SIZE_PTR)ThisChunk + ChunkSize + 12); //12 is the length var itself, Chunktype and CRC.
if (Png_IsChunk(ThisChunk->ChuckType, "IDAT")) //Is IDAT ?
{
PZLIB iData = (PZLIB) ((SIZE_PTR)ThisChunk + 8); //8 is the length and chunkType.
PBYTE FirstBlock; //pointer to the first 3 bits of the deflate data.
if ((iData->CMF & 0x0F) == 8) //deflate compression method (CM is the low 4 bits of CMF).
{
if ((iData->FLG & 0x20) == 0x20)
{
FirstBlock = (PBYTE) ((SIZE_PTR)iData + 6); //DICTID Present.
}
else FirstBlock = (PBYTE) ((SIZE_PTR)iData + 2); //DICTID Not present.
RetValue = puff(UncompressedData, &BufferSize, FirstBlock, &ChunkSize); //I believe chunksize should be fine.
if (RetValue != 0)
{
WCHAR ErrorText[100];
swprintf_s(ErrorText, 100, L"%u", RetValue); //Convert data into string.
MessageBox(NULL, ErrorText, NULL, MB_OK);
}
}
}
ThisChunk = NextChunk;
} while (!Png_IsChunk(ThisChunk->ChuckType, "IEND"));
//LodePNGInfo ImageInfo;
//PBYTE Png_Real_Image = (PBYTE) malloc(BufferSize);
//ImageInfo.compression_method = PngIhdr->CompressionMethod;
//ImageInfo.filter_method = PngIhdr->FilterMethod;
//ImageInfo.interlace_method = PngIhdr->InterlaceMethod;
//ImageInfo.color.bitdepth = PngIhdr->BitDepth;
//ImageInfo.color.colortype = PngIhdr->ColourType;
//Remove Filter/crap blah blah.
//postProcessScanlines(Png_Real_Image, UncompressedData, Png_Width, Png_Height, &ImageInfo);
ImageData->Width = Png_Width;
ImageData->Height = Png_Height;
ImageData->Planes = 0; //Will need changed later.
ImageData->BitsPerPixel = 32; //Will need changed later.
ImageData->Pixels = 0;
//ImageData->Pixels = Png_Real_Image; //image not uncompressed yet.
return TRUE; //ret true for now. fix later.
}
I just hope to make clearer what was stated before: Huffman coding is a method for encoding values using a variable number of bits. In, say, ASCII coding, every letter gets the same number of bits no matter how frequently it is used. In Huffman coding, you could make "e" use fewer bits than an "X".
The trick in Huffman coding is that the codes are prefix-free: no code is a prefix of another. After reading each bit, the decoder knows, unambiguously, whether it has a complete value or needs to read another bit.
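As a toy illustration (the code table here is invented for this example, not the actual deflate code):

/* Toy prefix-free code: 'e' = 0, 't' = 10, 'X' = 110, 'Q' = 111.
   next_bit() is a hypothetical callback returning the next input bit.
   Because no code is a prefix of another, every bit read either
   completes a symbol or tells the decoder to keep reading. */
int decode_symbol(int (*next_bit)(void))
{
    if (next_bit() == 0) return 'e';   /* code 0   */
    if (next_bit() == 0) return 't';   /* code 10  */
    if (next_bit() == 0) return 'X';   /* code 110 */
    return 'Q';                        /* code 111 */
}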
To comprehend the deflate process you need to understand the LZ77 algorithm and Huffman coding.
On their own, both techniques are simple. The complexity comes from how they are put together.
LZ77 compresses by finding previous occurrences of a string. When a string has occurred before, it is compressed by referencing the previous occurrence: the distance is the offset back to that occurrence, and distance and length together specify which bytes to copy.
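Here's a minimal sketch of what an inflater does with one length/distance pair (the names are mine; real inflate code also range-checks the distance against the output produced so far):

#include <stddef.h>

/* out[] holds the bytes decoded so far; *pos is the write cursor.
   Copy length bytes starting distance bytes back. Copying one byte at
   a time makes overlapping matches (distance < length) work, which
   deflate explicitly allows, e.g. run-length encoding with distance 1. */
static void copy_match(unsigned char *out, size_t *pos,
                       unsigned length, unsigned distance)
{
    while (length--) {
        out[*pos] = out[*pos - distance];
        (*pos)++;
    }
}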
The problem is not with puff.
All the IDAT chunks in the PNG file need to be put together before calling puff.
It should look something like this:
BYTE LoadPng(PPNG PngFile, PIMAGE ImageData)
{
PDWORD Pixel = 0;
DWORD ChunkSize = 0;
PPNG_IHDR PngIhdr = (PPNG_IHDR) ((SIZE_PTR) &PngFile->FirstChunk + sizeof(PNG_CHUNK));
DWORD Png_Width = Png_ReadDword((PBYTE)&PngIhdr->Width);
DWORD Png_Height = Png_ReadDword((PBYTE)&PngIhdr->Height);
DWORD BufferSize = (Png_Width*Png_Height) * 8; //This is just a guess right now, haven't done the math yet!
ChunkSize = Png_ReadDword((PBYTE)&PngFile->FirstChunk.Length);
PPNG_CHUNK ThisChunk = (PPNG_CHUNK) ((SIZE_PTR)&PngFile->FirstChunk + ChunkSize + 12); //12 is the length var itself, Chunktype and CRC.
PPNG_CHUNK NextChunk;
PBYTE UncompressedData = (PBYTE) malloc(BufferSize);
PBYTE TempBuffer = (PBYTE) malloc(BufferSize); //Put all IDAT chunks together before uncompressing.
DWORD DeflateSize = 0; //All IDAT Chunks Added.
PZLIB iData = NULL;
PBYTE FirstBlock = NULL; //pointer to the first 3 bits of the deflate data.
INT RetValue = 0;
do
{
ChunkSize = Png_ReadDword((PBYTE)&ThisChunk->Length);
NextChunk = (PPNG_CHUNK) ((SIZE_PTR)ThisChunk + ChunkSize + 12); //12 is the length var itself, Chunktype and CRC.
if (Png_IsChunk(ThisChunk->ChuckType, "IDAT")) //Is IDAT ?
{
CopyMemory(&TempBuffer[DeflateSize], (PBYTE) ((SIZE_PTR)ThisChunk + 8), ChunkSize); //8 is the length and chunkType.
DeflateSize += ChunkSize;
}
ThisChunk = NextChunk;
} while (!Png_IsChunk(ThisChunk->ChuckType, "IEND"));
iData = (PZLIB) TempBuffer;
if ((iData->CMF & 0x0F) == 8) //deflate compression method (CM is the low 4 bits of CMF).
{
if ((iData->FLG & 0x20) == 0x20)
{
FirstBlock = (PBYTE) ((SIZE_PTR)iData + 6); //DICTID Present.
}
else FirstBlock = (PBYTE) ((SIZE_PTR)iData + 2); //DICTID Not present.
}
RetValue = puff(UncompressedData, &BufferSize, FirstBlock, &DeflateSize); //DeflateSize is the total of all IDAT chunks.
if (RetValue != 0)
{
WCHAR ErrorText[100];
swprintf_s(ErrorText, 100, L"%u", RetValue);
MessageBox(NULL, ErrorText, NULL, MB_OK);
}
//LodePNGInfo ImageInfo;
//PBYTE Png_Real_Image = (PBYTE) malloc(BufferSize);
//ImageInfo.compression_method = PngIhdr->CompressionMethod;
//ImageInfo.filter_method = PngIhdr->FilterMethod;
//ImageInfo.interlace_method = PngIhdr->InterlaceMethod;
//ImageInfo.color.bitdepth = PngIhdr->BitDepth;
//ImageInfo.color.colortype = PngIhdr->ColourType;
//Remove Filter/crap blah blah.
//postProcessScanlines(Png_Real_Image, UncompressedData, Png_Width, Png_Height, &ImageInfo);
ImageData->Width = Png_Width;
ImageData->Height = Png_Height;
ImageData->Planes = 0; //Will need changed later.
ImageData->BitsPerPixel = 32; //Will need changed later.
ImageData->Pixels = 0;
//ImageData->Pixels = Png_Real_Image; //image not uncompressed yet.
return TRUE; //ret true for now. fix later.
}
You need to first read up on compression, since there is a lot of basic stuff that you're not getting. For example, The Data Compression Book by Nelson and Gailly.
Since it's a code, specifically a Huffman code, by definition the number of bits is variable.
If you don't know what the distance is for, then you need to first understand the LZ77 compression approach.
Lastly, aside from curiosity and self-education, there is no need for you to understand the deflate specification or to write your own inflate code. That's what zlib is for.