I get an AudioBufferList from a wav file (sample rate 44100 Hz, duration 2 seconds).
But I can't get 44100 * 2 = 88200 samples. Instead, I got an AudioBufferList whose mNumberBuffers is 512.
How can I get the samples from the AudioBufferList?
Here is my method to get the samples from a wav file:
-(float *) GetSoundFile:(NSString *)fileName
{
    NSBundle *bundle = [NSBundle mainBundle];
    NSString *path = [bundle pathForResource:[fileName stringByDeletingPathExtension]
                                      ofType:[fileName pathExtension]];
    NSURL *audioURL = [NSURL fileURLWithPath:path];
    if (!audioURL)
    {
        NSLog(@"file: %@ not found.", fileName);
    }

    OSStatus err = 0;
    theFileLengthInFrames = 0; // this is global
    AudioStreamBasicDescription theFileFormat;
    UInt32 thePropertySize = sizeof(theFileFormat);
    ExtAudioFileRef extRef = NULL;
    void *theData = NULL;
    AudioStreamBasicDescription theOutputFormat;

    // Open the file with ExtAudioFileOpenURL()
    err = ExtAudioFileOpenURL((__bridge CFURLRef)(audioURL), &extRef);

    // Get the audio data format
    err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_FileDataFormat, &thePropertySize, &theFileFormat);

    theOutputFormat.mSampleRate = samplingRate = theFileFormat.mSampleRate;
    theOutputFormat.mChannelsPerFrame = numChannels = 1;
    theOutputFormat.mFormatID = kAudioFormatLinearPCM;
    theOutputFormat.mBytesPerFrame = sizeof(Float32) * theOutputFormat.mChannelsPerFrame;
    theOutputFormat.mFramesPerPacket = theFileFormat.mFramesPerPacket;
    theOutputFormat.mBytesPerPacket = theOutputFormat.mFramesPerPacket * theOutputFormat.mBytesPerFrame;
    theOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    theOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked; // packed Float32

    // Set the client (output) data format
    err = ExtAudioFileSetProperty(extRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &theOutputFormat);

    // Here I get the total frame count and write it into a global variable
    thePropertySize = sizeof(theFileLengthInFrames);
    err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_FileLengthFrames, &thePropertySize, &theFileLengthInFrames);

    UInt32 dataSize = (UInt32)(theFileLengthInFrames * theOutputFormat.mBytesPerFrame);
    theData = malloc(dataSize);
    AudioBufferList theDataBuffer;
    if (theData)
    {
        theDataBuffer.mNumberBuffers = 1;
        theDataBuffer.mBuffers[0].mDataByteSize = dataSize;
        theDataBuffer.mBuffers[0].mNumberChannels = theOutputFormat.mChannelsPerFrame;
        theDataBuffer.mBuffers[0].mData = theData;
        // Read the data into the AudioBufferList
        err = ExtAudioFileRead(extRef, (UInt32*)&theFileLengthInFrames, &theDataBuffer);
    }
    return (float*)theDataBuffer.mBuffers[0].mData;
    // the returned data holds theFileLengthInFrames frames
}
An AudioBufferList should not be assumed to have a fixed size, because it is a temporary, hardware-dependent buffer.
Its size is unpredictable: you can only set a preferred size, and it is not even guaranteed to be the same between two read calls.
To get 88200 samples (or however many you like) you must incrementally fill a new buffer using,
call by call, the samples in the AudioBufferList.
I suggest using this circular buffer, https://github.com/michaeltyson/TPCircularBuffer,
made for this purpose.
Hope this helps.
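A rough sketch of that approach (the function names are from the TPCircularBuffer library; the 88200-sample target and the Float32 sample type are assumptions for the example):

#include <AudioToolbox/AudioToolbox.h>
#include <cstring>
#include "TPCircularBuffer.h"

// Producer side: append whatever samples each AudioBufferList happens to deliver.
void produceSamples(TPCircularBuffer *cb, const AudioBufferList *abl)
{
    for (UInt32 i = 0; i < abl->mNumberBuffers; i++)
        TPCircularBufferProduceBytes(cb, abl->mBuffers[i].mData,
                                     (int32_t)abl->mBuffers[i].mDataByteSize);
}

// Consumer side: take exactly 88200 Float32 samples once enough have accumulated.
bool consume88200(TPCircularBuffer *cb, float *out)
{
    int32_t availableBytes = 0;
    float *tail = (float *)TPCircularBufferTail(cb, &availableBytes);
    if (!tail || availableBytes < (int32_t)(88200 * sizeof(float)))
        return false;                                  // not enough buffered yet
    memcpy(out, tail, 88200 * sizeof(float));
    TPCircularBufferConsume(cb, (int32_t)(88200 * sizeof(float)));
    return true;
}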
As far as I know, an AudioBufferList must be configured by you to receive data, and is then filled by some reading function (e.g. ExtAudioFileRead()). So you should prepare it by allocating the buffers you need (usually 1 or 2), set mNumberBuffers to that number, and read the audio data into them. The AudioBufferList just stores those buffers, and they will contain the frame values.
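For example, a minimal sketch of such a read loop, assuming the file was opened with ExtAudioFileOpenURL() and the client format was set to packed Float32 as in the question (the 4096-frame chunk size is an arbitrary choice):

#include <AudioToolbox/AudioToolbox.h>
#include <vector>

// Read an entire ExtAudioFileRef (already set to a Float32 client format)
// into one flat std::vector<float>, one chunk at a time.
std::vector<float> readAllFrames(ExtAudioFileRef file, UInt32 channels)
{
    std::vector<float> all;
    const UInt32 chunkFrames = 4096;
    std::vector<float> chunk(chunkFrames * channels);
    for (;;) {
        AudioBufferList abl;
        abl.mNumberBuffers = 1;
        abl.mBuffers[0].mNumberChannels = channels;
        abl.mBuffers[0].mDataByteSize = chunkFrames * channels * sizeof(float);
        abl.mBuffers[0].mData = chunk.data();
        UInt32 frames = chunkFrames;          // in: capacity, out: frames actually read
        if (ExtAudioFileRead(file, &frames, &abl) != noErr || frames == 0)
            break;                            // error or end of file
        all.insert(all.end(), chunk.begin(), chunk.begin() + frames * channels);
    }
    return all;
}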
I'm trying to learn the basics of writing with ffmpeg (reading already works), so I'm just trying to take in an input .ts file and write out/pass through the exact same H.264 stream to a new output file. I don't get any compilation errors, but for some reason I can't figure out why my output file's framerate is very wrong. Also, when I read my output file back in, I get printouts saying "Packet corrupt (stream = 0, dts = #)".
I followed the instructions in the ffmpeg library comments, so I'm not sure what I'm missing. I call initOutStream(), then initH264encoder(), then during the reading/decoding readH264Packet() is called repeatedly. (Code trimmed for readability's sake; the relevant sections are below.)
Edit: If I put my output file through the actual ffmpeg command-line app, the framerate issue seems to get fixed. I wonder where I'm messing up.
void test::initOutStream() {
    //create muxing context
    outstreamContext = avformat_alloc_context();

    //oformat
    AVOutputFormat *guessFormat; //Populate oformat
    guessFormat = av_guess_format(NULL, inputVideoUrl.c_str(), NULL);
    outstreamContext->oformat = guessFormat;
    outstreamContext->oformat->video_codec = AV_CODEC_ID_H264;
    //outstreamContext->bit_rate = 400000; //No effect

    //pb
    AVIOContext *outAVIOContext = nullptr;
    //int result = avio_open(&outAVIOContext, outputVideoUrl.c_str(), AVIO_FLAG_WRITE);
    int result = avio_open2(&outAVIOContext, outputVideoUrl.c_str(), AVIO_FLAG_WRITE, NULL, NULL); //Documentation said to use this method
    outstreamContext->pb = outAVIOContext;
}
void test::initH264encoder() { //Frame -> packet
    int result;
    h264OutCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    h264OutStream = avformat_new_stream(outstreamContext, h264OutCodec);
    h264OutStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    h264OutStream->codecpar->codec_id = AV_CODEC_ID_H264;
    h264OutStream->codecpar->width = 640;
    h264OutStream->codecpar->height = 480;
    h264OutStream->id = H264_STREAM_ID;
    h264OutStream->codecpar->color_range = AVCOL_RANGE_MPEG;
    //h264OutStream->codecpar->bit_rate = 400000;

    h264OutContext = avcodec_alloc_context3(h264OutCodec);
    h264OutContext->width = 640;
    h264OutContext->height = 480;
    h264OutContext->time_base = (AVRational){1, static_cast<int>(29.97)}; //note: the cast truncates to 29, i.e. a 1/29 time base
    h264OutContext->pix_fmt = AV_PIX_FMT_YUV420P;
    result = avcodec_open2(h264OutContext, h264OutCodec, nullptr);

    //Alloc packet + finish
    outPacket = av_packet_alloc();

    //Write header
    result = avformat_write_header(outstreamContext, NULL);
}
Assume that reading was set up correctly:

void test::readH264Packet(__unused uint64_t tick) {
    //...av_read_frame(streamContext, inPacket);
    //...avcodec_send_packet(h264Context, inPacket);
    //...avcodec_receive_frame(h264Context, yuvFrame)

    //My passthrough:
    if(shouldOutputH264Stream){
        result = avcodec_send_frame(h264OutContext, yuvFrame);             //1. Encode frame to packet
        result = avcodec_receive_packet(h264OutContext, outPacket2);       //2. Get encoded packet
        result = av_interleaved_write_frame(outstreamContext, outPacket2); //3. Write packet

        //Write trailer and free happens later
    }
}
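One thing I have been experimenting with (not sure it is right) is rescaling the packet timestamps before the write, roughly like this, using the contexts set up above:

// Sketch: rescale the encoded packet's pts/dts/duration from the encoder's
// time base to the muxer stream's time base before writing it out.
outPacket2->stream_index = h264OutStream->index;
av_packet_rescale_ts(outPacket2, h264OutContext->time_base, h264OutStream->time_base);
result = av_interleaved_write_frame(outstreamContext, outPacket2);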
I am messing around for the first time with iOS and Objective-C++. I'm coming from C/C++, so please excuse my bad coding in the examples below.
I am trying to live stream the microphone audio of my iOS device over TCP; the iOS device acts as the server and sends the data to all clients that connect.
To do so, I first use AVCaptureDevice and requestAccessForMediaType:AVMediaTypeAudio to request access to the microphone (along with the needed entry in the Info.plist).
Then I create an AVCaptureSession* using the function below:
AVCaptureSession* createBasicARecordingSession(aReceiver* ObjectReceivingAudioFrames){
    AVCaptureSession* s = [[AVCaptureSession alloc] init];
    AVCaptureDevice* aDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput* aInput = NULL;

    if([aDevice lockForConfiguration:NULL] == YES && aDevice){
        aInput = [AVCaptureDeviceInput deviceInputWithDevice:aDevice error:nil];
        [aDevice unlockForConfiguration];
    }
    else if(!aDevice){
        fprintf(stderr, "[d] could not create device. (%p)\n", aDevice);
        return NULL;
    }
    else{
        fprintf(stderr, "[d] could not lock device.\n");
        return NULL;
    }

    if(!aInput){
        fprintf(stderr, "[d] could not create input.\n");
        return NULL;
    }

    AVCaptureAudioDataOutput* aOutput = [[AVCaptureAudioDataOutput alloc] init];
    dispatch_queue_t aQueue = dispatch_queue_create("aQueue", NULL);

    if(!aOutput){
        fprintf(stderr, "[d] could not create output.\n");
        return NULL;
    }

    [aOutput setSampleBufferDelegate:ObjectReceivingAudioFrames queue:aQueue];

    // the line below only works on macOS
    //aOutput.audioSettings = settings;

    [s beginConfiguration];
    if([s canAddInput:aInput]){
        [s addInput:aInput];
    }
    else{
        fprintf(stderr, "[d] could not add input.\n");
        return NULL;
    }
    if([s canAddOutput:aOutput]){
        [s addOutput:aOutput];
    }
    else{
        fprintf(stderr, "[d] could not add output.\n");
        return NULL;
    }
    [s commitConfiguration];

    return s;
}
The aReceiver* class (?) is defined below and receives the audio frames provided by the AVCaptureAudioDataOutput* object. The frames are stored inside a std::vector.
(I'm adding the code as an image, as I could not get it formatted right...)
Then I start the AVCaptureSession* using [audioSession startRunning].
When a TCP client connects, I first create an AudioConverterRef and two AudioStreamBasicDescriptions to convert the audio frames to AAC; see below:
AudioStreamBasicDescription asbdIn, asbdOut;
AudioConverterRef converter;

asbdIn.mFormatID = kAudioFormatLinearPCM;
asbdIn.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; // == 12
asbdIn.mSampleRate = 44100;
asbdIn.mChannelsPerFrame = 1;
asbdIn.mFramesPerPacket = 1;
asbdIn.mBitsPerChannel = 16;
asbdIn.mBytesPerFrame = (asbdIn.mBitsPerChannel / 8) * asbdIn.mChannelsPerFrame; // == 2
asbdIn.mBytesPerPacket = asbdIn.mBytesPerFrame;
asbdIn.mReserved = 0;

asbdOut.mFormatID = kAudioFormatMPEG4AAC;
asbdOut.mFormatFlags = 0;
asbdOut.mSampleRate = 44100;
asbdOut.mChannelsPerFrame = 1;
asbdOut.mFramesPerPacket = 1024; // AAC packs 1024 frames per packet
asbdOut.mBitsPerChannel = 0;     // compressed format: the size fields stay 0
asbdOut.mBytesPerFrame = 0;
asbdOut.mBytesPerPacket = asbdOut.mBytesPerFrame;
asbdOut.mReserved = 0;

OSStatus err = AudioConverterNew(&asbdIn, &asbdOut, &converter);
Then I create an AudioBufferList* to store the encoded frames:

while(audioInput.locked){ // audioInput is my aReceiver*
    usleep(0.2 * 1000000);
}
audioInput.locked = true;

UInt32 RequestedPackets = 8192;

//AudioBufferList* aBufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList));
AudioBufferList* aBufferList = static_cast<AudioBufferList*>(calloc(1, offsetof(AudioBufferList, mBuffers) + (sizeof(AudioBuffer) * 1)));
aBufferList->mNumberBuffers = 1;
aBufferList->mBuffers[0].mNumberChannels = asbdIn.mChannelsPerFrame;
aBufferList->mBuffers[0].mData = static_cast<void*>(calloc(RequestedPackets, asbdIn.mBytesPerFrame));
aBufferList->mBuffers[0].mDataByteSize = asbdIn.mBytesPerFrame * RequestedPackets;
Then I go through the frames stored in the std::vector mentioned earlier and pass them to AudioConverterFillComplexBuffer(). After conversion, I concatenate all encoded frames into one NSMutableData, which I then write() to the socket connected to the client.

long aBufferListSize = audioInput.aBufferList.size();
while(aBufferListSize > 0){
    err = AudioConverterFillComplexBuffer(converter, feedAFrames, static_cast<void*>(&audioInput.aBufferList[audioInput.aBufferList.size() - aBufferListSize]), &RequestedPackets, aBufferList, NULL);

    NSMutableData* encodedData = [[NSMutableData alloc] init];
    long encodedDataLen = 0;
    for(int i = 0; i < aBufferList->mNumberBuffers; i++){
        Float32* frame = (Float32*)aBufferList->mBuffers[i].mData;
        [encodedData appendBytes:frame length:aBufferList->mBuffers[i].mDataByteSize];
        encodedDataLen += aBufferList->mBuffers[i].mDataByteSize;
    }

    unsigned char* encodedDataBytes = (unsigned char*)[encodedData bytes];
    fprintf(stderr, "[d] got %li encoded bytes to send...\n", encodedDataLen);
    long bytes = write(Client->GetFD(), encodedDataBytes, encodedDataLen);
    fprintf(stderr, "[d] written %li of %li bytes.\n", bytes, encodedDataLen);

    usleep(0.2 * 1000000);
    aBufferListSize--;
}
audioInput.aBufferList.clear();
audioInput.locked = false;
Below is the feedAFrames() callback used in the AudioConverterFillComplexBuffer() call:
(again, this is an image of the code, for the same reason as above)
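Since that is an image, here is just the general shape of an AudioConverterComplexInputDataProc for context; the struct and member names are illustrative placeholders, not my actual code:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical container for one captured PCM buffer (the real type is in the image).
struct PCMChunk {
    void  *data;     // interleaved 16-bit samples
    UInt32 byteSize; // total size of data in bytes
    UInt32 consumed; // how much the converter has already taken
};

// Hand the converter as many input packets as we have; when drained, report
// zero packets and return a nonzero status so the conversion call stops.
static OSStatus feedAFramesSketch(AudioConverterRef /*converter*/,
                                  UInt32 *ioNumberDataPackets,
                                  AudioBufferList *ioData,
                                  AudioStreamPacketDescription ** /*outDesc*/,
                                  void *inUserData)
{
    PCMChunk *chunk = static_cast<PCMChunk *>(inUserData);
    const UInt32 bytesPerPacket = 2; // 16-bit mono PCM, 1 frame per packet
    UInt32 bytesLeft = chunk->byteSize - chunk->consumed;
    if (bytesLeft == 0) {
        *ioNumberDataPackets = 0;
        return 1; // any nonzero status tells the converter there is no more input
    }
    UInt32 packets = bytesLeft / bytesPerPacket;
    if (packets > *ioNumberDataPackets) packets = *ioNumberDataPackets;
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData = static_cast<unsigned char *>(chunk->data) + chunk->consumed;
    ioData->mBuffers[0].mDataByteSize = packets * bytesPerPacket;
    chunk->consumed += packets * bytesPerPacket;
    *ioNumberDataPackets = packets;
    return noErr;
}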
Steps 5 to 7 are repeated until the TCP connection is closed.
Each step runs without any noticeable error (I know I could include much better error handling here), and I do get data out of steps 3 and 7. However, what comes out at the end does not seem to be AAC.
As I'm rather new to all of this, I'm really not sure where my error is; I'm sure there are several things I got wrong. It is kind of hard to find suitable example code for what I am trying to do, and the above is the best I could come up with so far from everything I found, paired with the Apple developer documentation.
I hope someone might take some time to explain what I did wrong and how I can get this to work. Thanks for reading this far!
I have a problem sending SChannel TLS messages larger than the negotiated maximum length.
When "EncryptSend" is called with a buffer larger than SecPkgContext_StreamSizes.cbMaximumMessage, the part beyond SecPkgContext_StreamSizes.cbMaximumMessage is not understood by the server (nor by Wireshark).
You should be able to break your data into chunks that are less than or equal to the cbMaximumMessage size. For example, if you are sending VOID* pvData of ULONG cbData bytes, then...
// Assumes m_Sizes was filled via QueryContextAttributes(SECPKG_ATTR_STREAM_SIZES)
// and m_pSendBuffer holds at least cbHeader + cbMaximumMessage + cbTrailer bytes.
SecBufferDesc Message;
SecBuffer Buffers[4];

while(0 < cbData)
{
    ULONG cbChunk = (cbData > m_Sizes.cbMaximumMessage) ? m_Sizes.cbMaximumMessage : cbData;

    Message.ulVersion = SECBUFFER_VERSION;
    Message.cBuffers = ARRAYSIZE(Buffers);
    Message.pBuffers = Buffers;

    Buffers[0].pvBuffer = m_pSendBuffer;
    Buffers[0].cbBuffer = m_Sizes.cbHeader;
    Buffers[0].BufferType = SECBUFFER_STREAM_HEADER;

    Buffers[1].pvBuffer = m_pSendBuffer + m_Sizes.cbHeader;
    Buffers[1].cbBuffer = cbChunk;
    Buffers[1].BufferType = SECBUFFER_DATA;
    CopyMemory(Buffers[1].pvBuffer, pvData, cbChunk);

    Buffers[2].pvBuffer = m_pSendBuffer + m_Sizes.cbHeader + cbChunk;
    Buffers[2].cbBuffer = m_Sizes.cbTrailer;
    Buffers[2].BufferType = SECBUFFER_STREAM_TRAILER;

    Buffers[3].BufferType = SECBUFFER_EMPTY;

    hr = EncryptMessage(&m_hContext, &Message, 0, 0);
    if(FAILED(hr))
        break;

    hr = pSocket->Send(m_pSendBuffer, Buffers[0].cbBuffer + cbChunk + Buffers[2].cbBuffer);
    if(FAILED(hr))
        break;

    pvData = reinterpret_cast<PBYTE>(pvData) + cbChunk;
    cbData -= cbChunk;
}
In each iteration of the loop, a chunk that is less than or equal to the maximum size is encrypted and sent. For this to work, the mechanism used to send data to the socket will likely need to employ a buffering strategy in case the socket's internal buffer is filled to capacity.
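A minimal sketch of such a strategy over a blocking Winsock socket (the names here are illustrative, not part of the code above):

#include <winsock2.h>

// Keep calling send() until the whole encrypted record has gone out; send()
// may accept fewer bytes than requested when the socket's buffer is nearly full.
bool SendAll(SOCKET s, const char *data, int cb)
{
    while (cb > 0) {
        int sent = send(s, data, cb, 0);
        if (sent == SOCKET_ERROR || sent == 0)
            return false;   // error, or connection closed
        data += sent;
        cb -= sent;
    }
    return true;
}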
I'm working on a function that can uncompress the deflate compression, so I can read/draw PNG files in my C++ program. However, the deflate specification isn't very clear on some things.
So my main question is:
Paragraph 3.2.7, "Compression with dynamic Huffman codes (BTYPE=10)", of the specification states that
the distance code follows the literal/length
But it does not state how many bits the distance code occupies. Is it an entire byte?
And how does the distance code relate? What's its use, really?
Does anyone have a general explanation? The specification is kind of lacking in clarity.
I found the specification here:
http://www.ietf.org/rfc/rfc1951.txt
Edit (here is my code, to be used with the puff inflate code).
First, the header (ConceptApp.h):
#include "resource.h"
#ifdef _WIN64
typedef unsigned long long SIZE_PTR;
#else
typedef unsigned long SIZE_PTR;
#endif
typedef struct _IMAGE {
DWORD Width; //Width in pixels.
DWORD Height; //Height in pixels.
DWORD BitsPerPixel; //24 (RGB), 32 (RGBA).
DWORD Planes; //Count of color planes
PBYTE Pixels; //Pointer to the first pixel of the image.
} IMAGE, *PIMAGE;
typedef DWORD LodePNGColorType;
typedef struct _LodePNGColorMode {
DWORD colortype;
DWORD bitdepth;
} LodePNGColorMode;
typedef struct LodePNGInfo
{
/*header (IHDR), palette (PLTE) and transparency (tRNS) chunks*/
unsigned compression_method;/*compression method of the original file. Always 0.*/
unsigned filter_method; /*filter method of the original file*/
unsigned interlace_method; /*interlace method of the original file*/
LodePNGColorMode color; /*color type and bits, palette and transparency of the PNG file*/
} LodePNGInfo;
typedef struct _ZLIB {
BYTE CMF;
BYTE FLG;
//DWORD DICTID; //if FLG.FDICT (Bit 5) is set, this variable follows.
//Compressed data here...
} ZLIB, *PZLIB;
typedef struct _PNG_IHDR {
DWORD Width;
DWORD Height;
BYTE BitDepth;
BYTE ColourType;
BYTE CompressionMethod;
BYTE FilterMethod;
BYTE InterlaceMethod;
} PNG_IHDR, *PPNG_IHDR;
typedef struct _PNG_CHUNK {
DWORD Length;
CHAR ChuckType[4];
} PNG_CHUNK, *PPNG_CHUNK;
typedef struct _PNG {
BYTE Signature[8];
PNG_CHUNK FirstChunk;
} PNG, *PPNG;
And the .cpp code file. The main function (LoadPng) can be found at the bottom of the file:
BYTE LoadPng(PPNG PngFile, PIMAGE ImageData)
{
    PDWORD Pixel = 0;
    DWORD ChunkSize = 0;
    PPNG_IHDR PngIhdr = (PPNG_IHDR) ((SIZE_PTR) &PngFile->FirstChunk + sizeof(PNG_CHUNK));

    DWORD Png_Width = Png_ReadDword((PBYTE)&PngIhdr->Width);
    DWORD Png_Height = Png_ReadDword((PBYTE)&PngIhdr->Height);

    DWORD BufferSize = (Png_Width*Png_Height) * 8; //This is just a guess right now, haven't done the math yet. !!!

    ChunkSize = Png_ReadDword((PBYTE)&PngFile->FirstChunk.Length);
    PPNG_CHUNK ThisChunk = (PPNG_CHUNK) ((SIZE_PTR)&PngFile->FirstChunk + ChunkSize + 12); //12 is the Length var itself, the ChunkType and the CRC.
    PPNG_CHUNK NextChunk;

    PBYTE UncompressedData = (PBYTE) malloc(BufferSize);

    INT RetValue = 0;

    do
    {
        ChunkSize = Png_ReadDword((PBYTE)&ThisChunk->Length);
        NextChunk = (PPNG_CHUNK) ((SIZE_PTR)ThisChunk + ChunkSize + 12); //12 is the Length var itself, the ChunkType and the CRC.

        if (Png_IsChunk(ThisChunk->ChunkType, "IDAT")) //Is IDAT?
        {
            PZLIB iData = (PZLIB) ((SIZE_PTR)ThisChunk + 8); //8 is the Length and ChunkType.
            PBYTE FirstBlock; //pointer to the first 3 bits of the deflate stuff.
            if ((iData->CMF & 8) == 8) //deflate compression method.
            {
                if ((iData->FLG & 0x20) == 0x20)
                {
                    FirstBlock = (PBYTE) ((SIZE_PTR)iData + 6); //DICTID present.
                }
                else FirstBlock = (PBYTE) ((SIZE_PTR)iData + 2); //DICTID not present.

                RetValue = puff(UncompressedData, &BufferSize, FirstBlock, &ChunkSize); //I believe ChunkSize should be fine.
                if (RetValue != 0)
                {
                    WCHAR ErrorText[100];
                    swprintf_s(ErrorText, 100, L"%u", RetValue); //Convert the error code into a string.
                    MessageBox(NULL, ErrorText, NULL, MB_OK);
                }
            }
        }
        ThisChunk = NextChunk;
    } while (!Png_IsChunk(ThisChunk->ChunkType, "IEND"));

    //LodePNGInfo ImageInfo;
    //PBYTE Png_Real_Image = (PBYTE) malloc(BufferSize);

    //ImageInfo.compression_method = PngIhdr->CompressionMethod;
    //ImageInfo.filter_method = PngIhdr->FilterMethod;
    //ImageInfo.interlace_method = PngIhdr->InterlaceMethod;
    //ImageInfo.color.bitdepth = PngIhdr->BitDepth;
    //ImageInfo.color.colortype = PngIhdr->ColourType;

    //Remove filter/crap blah blah.
    //postProcessScanlines(Png_Real_Image, UncompressedData, Png_Width, Png_Height, &ImageInfo);

    ImageData->Width = Png_Width;
    ImageData->Height = Png_Height;
    ImageData->Planes = 0;        //Will need changing later.
    ImageData->BitsPerPixel = 32; //Will need changing later.
    ImageData->Pixels = 0;
    //ImageData->Pixels = Png_Real_Image; //image not uncompressed yet.

    return TRUE; //ret true for now, fix later.
}
I just hope to make clearer what was stated before: Huffman coding is a method for encoding values using a variable number of bits. In, say, ASCII coding, every letter gets the same number of bits no matter how frequently it is used. In Huffman coding, you could make "e" use fewer bits than an "X".
The trick in Huffman coding is that the codes are prefix-free. After reading each bit, the decoder knows, unambiguously, whether it has a complete value or needs to read another bit. For example, with the codes e=0, a=10, x=11, the bit stream 010011 can only be read as e, a, e, x.
To comprehend the deflate process you need to understand the LZ algorithm and Huffman coding.
On their own, both techniques are simple. The complexity comes from how they are put together.
LZ compresses by finding previous occurrences of a string. When a string has occurred previously, it is compressed by referencing that previous occurrence. The distance is the offset back to the previous occurrence; distance and length together specify it.
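To make that concrete, here is a minimal sketch of how a decoder resolves one length/distance pair. Note the byte-by-byte copy, which intentionally lets the match overlap the bytes it is producing (e.g. distance 1, length 10 repeats the last byte ten times):

#include <cstddef>
#include <vector>

// Resolve one LZ77 length/distance pair: append `length` bytes copied from
// `distance` bytes back in the already-decoded output.
void copyMatch(std::vector<unsigned char> &out, std::size_t distance, std::size_t length)
{
    std::size_t from = out.size() - distance; // distance counts backwards from the end
    for (std::size_t i = 0; i < length; i++)
        out.push_back(out[from + i]);         // still correct when length > distance
}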
The problem is not with puff.
All the IDAT chunks in the PNG file need to be put together before calling puff.
It should look something like this:
BYTE LoadPng(PPNG PngFile, PIMAGE ImageData)
{
    PDWORD Pixel = 0;
    DWORD ChunkSize = 0;
    PPNG_IHDR PngIhdr = (PPNG_IHDR) ((SIZE_PTR) &PngFile->FirstChunk + sizeof(PNG_CHUNK));

    DWORD Png_Width = Png_ReadDword((PBYTE)&PngIhdr->Width);
    DWORD Png_Height = Png_ReadDword((PBYTE)&PngIhdr->Height);

    DWORD BufferSize = (Png_Width*Png_Height) * 8; //This is just a guess right now, haven't done the math yet. !!!

    ChunkSize = Png_ReadDword((PBYTE)&PngFile->FirstChunk.Length);
    PPNG_CHUNK ThisChunk = (PPNG_CHUNK) ((SIZE_PTR)&PngFile->FirstChunk + ChunkSize + 12); //12 is the Length var itself, the ChunkType and the CRC.
    PPNG_CHUNK NextChunk;

    PBYTE UncompressedData = (PBYTE) malloc(BufferSize);
    PBYTE TempBuffer = (PBYTE) malloc(BufferSize); //Put all IDAT chunks together before uncompressing.
    DWORD DeflateSize = 0; //All IDAT chunks added.
    PZLIB iData = NULL;
    PBYTE FirstBlock = NULL; //pointer to the first 3 bits of the deflate stuff.

    INT RetValue = 0;

    do
    {
        ChunkSize = Png_ReadDword((PBYTE)&ThisChunk->Length);
        NextChunk = (PPNG_CHUNK) ((SIZE_PTR)ThisChunk + ChunkSize + 12); //12 is the Length var itself, the ChunkType and the CRC.

        if (Png_IsChunk(ThisChunk->ChunkType, "IDAT")) //Is IDAT?
        {
            CopyMemory(&TempBuffer[DeflateSize], (PBYTE) ((SIZE_PTR)ThisChunk + 8), ChunkSize); //8 is the Length and ChunkType.
            DeflateSize += ChunkSize;
        }
        ThisChunk = NextChunk;
    } while (!Png_IsChunk(ThisChunk->ChunkType, "IEND"));

    iData = (PZLIB) TempBuffer;

    if ((iData->CMF & 8) == 8) //deflate compression method.
    {
        if ((iData->FLG & 0x20) == 0x20)
        {
            FirstBlock = (PBYTE) ((SIZE_PTR)iData + 6); //DICTID present.
        }
        else FirstBlock = (PBYTE) ((SIZE_PTR)iData + 2); //DICTID not present.
    }

    RetValue = puff(UncompressedData, &BufferSize, FirstBlock, &DeflateSize); //Pass the total size of the concatenated IDAT data.
    if (RetValue != 0)
    {
        WCHAR ErrorText[100];
        swprintf_s(ErrorText, 100, L"%u", RetValue);
        MessageBox(NULL, ErrorText, NULL, MB_OK);
    }

    //LodePNGInfo ImageInfo;
    //PBYTE Png_Real_Image = (PBYTE) malloc(BufferSize);

    //ImageInfo.compression_method = PngIhdr->CompressionMethod;
    //ImageInfo.filter_method = PngIhdr->FilterMethod;
    //ImageInfo.interlace_method = PngIhdr->InterlaceMethod;
    //ImageInfo.color.bitdepth = PngIhdr->BitDepth;
    //ImageInfo.color.colortype = PngIhdr->ColourType;

    //Remove filter/crap blah blah.
    //postProcessScanlines(Png_Real_Image, UncompressedData, Png_Width, Png_Height, &ImageInfo);

    ImageData->Width = Png_Width;
    ImageData->Height = Png_Height;
    ImageData->Planes = 0;        //Will need changing later.
    ImageData->BitsPerPixel = 32; //Will need changing later.
    ImageData->Pixels = 0;
    //ImageData->Pixels = Png_Real_Image; //image not uncompressed yet.

    return TRUE; //ret true for now, fix later.
}
You need to first read up on compression, since there is a lot of basic stuff that you're not getting. E.g. The Data Compression Book by Nelson and Gailly.
Since it's a code, specifically a Huffman code, by definition the number of bits is variable.
If you don't know what the distance is for, then you need to first understand the LZ77 compression approach.
Lastly, aside from curiosity and self-education, there is no need for you to understand the deflate specification or to write your own inflate code. That's what zlib is for.
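For example, once you have the complete zlib stream (the concatenated IDAT data), a single zlib call does the whole job. A sketch, where the destination size is your own sufficiently large guess, as in the code above:

#include <zlib.h>

// Inflate a complete zlib-wrapped stream in one call. destLen holds the
// output capacity on entry and the decompressed size on return.
int inflateWhole(const unsigned char *src, unsigned long srcLen,
                 unsigned char *dest, unsigned long *destLen)
{
    // Unlike puff(), uncompress() parses the 2-byte zlib header (CMF/FLG)
    // itself, so the stream is passed starting from its very first byte.
    return uncompress(dest, destLen, src, srcLen);
}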
After understanding (with some help...) how the compress and uncompress functions of the zlib library work, I'm now trying to understand how deflate and inflate work. As far as I understand, compress is used in a single call, whereas deflate can be called several times.
In a simple program with a Particle struct (coordinates x, y, z), I can deflate my data without errors (getting a Z_STREAM_END response) and then inflate it with another z_stream object (Z_STREAM_END response too). But when I try to display my data back from the inflate response, I can get the x and y coordinates of my struct, but not the third one (z).
I think it's due to a wrong parameter I gave to my z_stream object for inflate, but I can't find which one. From reading docs and examples, this is how I think z_stream works (this is just an example):
// Here I give a total memory size for the output buffer used by the deflate func
#define CHUNK 16384

struct Particle
{
    float x;
    float y;
    float z;
};

...

// An element to hold a single particle and give it to the deflate func
Bytef *dataOriginal = (Bytef*)malloc( sizeof(Particle) );

// This var will be used to hold the compressed data
Bytef *dataCompressed = (Bytef*)malloc( CHUNK );

z_stream strm;
strm.zalloc = Z_NULL;
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;

deflateInit(&strm, Z_DEFAULT_COMPRESSION);

strm.avail_out = CHUNK;
strm.next_out = dataCompressed;

int nbrLoop = 2;
int spaceUsed = 0;
int flush;
Particle p;

for (int i = 0; i < nbrLoop; i++){
    // set all values equal to 0
    memset( &p, 0, sizeof(Particle) );

    // insert some random values
    p.x = (i+1) * 1;
    p.y = (i+1) * 3;
    p.z = (i+1) * 7;

    // copy these values into a Bytef* element
    memcpy( dataOriginal, &p, sizeof(Particle) );

    strm.avail_in = sizeof(dataOriginal);
    strm.next_in = dataOriginal;

    // If it's the last particle:
    if(i == nbrLoop - 1){
        flush = Z_FINISH;
    }
    else{
        flush = Z_NO_FLUSH;
    }

    int response = deflate(&strm, flush);
    // I don't get any errors here
    // EDIT: Get Z_OK on the first loop, then Z_STREAM_END on the second (last)

    if( response == Z_STREAM_END ){
        spaceUsed = CHUNK - strm.avail_out;
    }
}

deflateEnd(&strm);

// Trying to get back my data
Bytef *decomp = (Bytef*)malloc( sizeof(Particle) );

z_stream strmInflate;
strmInflate.zalloc = Z_NULL;
strmInflate.zfree = Z_NULL;
strmInflate.opaque = Z_NULL;

inflateInit(&strmInflate);

// data I want to get back at the next inflate
strmInflate.avail_in = sizeof(Particle);
strmInflate.next_in = dataCompressed;

// Two particles were compressed, so I need to get two back
strmInflate.avail_out = sizeof(Particle) * 2;
strmInflate.next_out = decomp;

int response = inflate( &strmInflate, Z_NO_FLUSH );
// No error here,
// EDIT: Get Z_OK

inflateEnd( &strmInflate );

Particle testP;
memset( &testP, 0, sizeof(Particle) );
memcpy( &testP, decomp, sizeof(Particle) );

std::cout << testP.x << std::endl; // displays 1: OK
std::cout << testP.y << std::endl; // displays 3: OK
std::cout << testP.z << std::endl; // displays 0: NOT OK
Moreover, I thought that calling inflate a second time would allow me to recover the data of the second particle created in my for loop, but I can't retrieve it.
Thanks in advance for any help!
strmInflate.avail_in = sizeof(Particle); needs to be strmInflate.avail_in = spaceUsed; You have to provide inflate with all of the data produced by deflate.
At the end you want to get Z_STREAM_END from inflate(), not Z_OK. Otherwise you have not decompressed the entire generated stream.
Note that, per the documentation in zlib.h, you also need to set next_in and avail_in (to Z_NULL and 0 if you like) before calling inflateInit().
Depending on the size of the input and output buffers you will be using in the final application, you may need more loops to assure that deflate() and inflate() can finish their jobs. Please see the example of how to use zlib.
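For the case in the question, where everything fits in memory, a minimal sketch of such a loop might look like this (the 16384-byte chunk size is an arbitrary choice):

#include <zlib.h>
#include <vector>

// Inflate a complete in-memory deflate stream, growing the output as needed.
// Returns true only when the stream ends cleanly with Z_STREAM_END.
bool inflateAll(const Bytef *src, uInt srcLen, std::vector<Bytef> &out)
{
    z_stream strm = {};  // zero-initializing also sets next_in/avail_in, as zlib.h asks
    if (inflateInit(&strm) != Z_OK) return false;
    strm.next_in = const_cast<Bytef *>(src);
    strm.avail_in = srcLen;               // all of the data produced by deflate
    int ret;
    Bytef chunk[16384];
    do {
        strm.next_out = chunk;
        strm.avail_out = sizeof(chunk);
        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END) break;  // data error or stalled stream
        out.insert(out.end(), chunk, chunk + (sizeof(chunk) - strm.avail_out));
    } while (ret != Z_STREAM_END);
    inflateEnd(&strm);
    return ret == Z_STREAM_END;
}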