Load jpeg image texture with libjpeg from QByteArray - c++

Hi, I have a QByteArray containing a JPEG image, which I obtained from a QNetworkReply. Everywhere I look, the JPEG is read from a file and decompressed like this:
FILE *infile;
...
if ((infile = fopen(filename, "rb")) == NULL)
{
    fprintf(stderr, "can't open %s\n", filename);
    exit(1);
}
jpeg_stdio_src(&cinfo, infile);
jpeg_read_header(&cinfo, 0);
jpeg_start_decompress(&cinfo);
Then

while (scan lines remain to be read)
    jpeg_read_scanlines(...);
But how do I read it from the QByteArray instead of a file/stdio stream?

Use

void jpeg_mem_src(j_decompress_ptr cinfo, unsigned char *inbuffer,
                  unsigned long insize);

instead of jpeg_stdio_src:

QByteArray qarr;
jpeg_decompress_struct cinfo;
jpeg_mem_src(&cinfo, reinterpret_cast<unsigned char *>(qarr.data()), qarr.size());
jpeg_read_header(&cinfo, TRUE);
jpeg_start_decompress(&cinfo);
// etc.

Note that QByteArray::data() returns char *, so it must be cast to unsigned char * for jpeg_mem_src, and that cinfo still needs an error manager and a jpeg_create_decompress() call before jpeg_mem_src.
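Put together, a minimal decode routine could look like this (a sketch; it assumes libjpeg 8+ or libjpeg-turbo, where jpeg_mem_src is available, and uses the default error handler, which calls exit() on fatal errors rather than throwing):

```cpp
#include <vector>
#include <jpeglib.h>
#include <QByteArray>

// Decode a JPEG held in a QByteArray into a packed pixel buffer.
std::vector<unsigned char> decodeJpeg(QByteArray &qarr, int &width, int &height)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);   // default handler: exits on fatal errors
    jpeg_create_decompress(&cinfo);

    jpeg_mem_src(&cinfo,
                 reinterpret_cast<unsigned char *>(qarr.data()),
                 static_cast<unsigned long>(qarr.size()));
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    width  = cinfo.output_width;
    height = cinfo.output_height;
    const int stride = width * cinfo.output_components;

    std::vector<unsigned char> pixels(static_cast<size_t>(stride) * height);
    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char *row = pixels.data() + cinfo.output_scanline * stride;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return pixels;
}
```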

You don't need an external JPEG library:

QByteArray array;
// read data into array here
QPixmap image;
image.loadFromData(array);

should be enough; Qt will autodetect the image format. Just remember to distribute the Qt JPEG plugin if you link your application dynamically.
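For the QNetworkReply case from the question, the same approach applies (a sketch; the slot wiring and the `reply` pointer are illustrative):

```cpp
#include <QImage>
#include <QNetworkReply>

// Slot connected to QNetworkReply::finished();
// `reply` is the QNetworkReply* from the original request.
void onReplyFinished(QNetworkReply *reply)
{
    const QByteArray data = reply->readAll();
    QImage image;
    if (!image.loadFromData(data))   // format autodetected from the content
        qWarning("failed to decode image");
    reply->deleteLater();
}
```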

Related

avformat_open_input cannot open a file with raw opus audio data

I have a problem when trying to open a binary file containing raw audio data in the Opus format. When I try to open this file, the library returns an error: Unknown input format: opus.
How can I open this file?
I need to open it and write all the raw audio data into an audio container. I understand that the opus format is intended only for encoding; I confirmed this using the command:
$ ffmpeg -formats | grep Opus
ffmpeg version 3.4.4 Copyright (c) 2000-2018 the FFmpeg developers
 E opus           Ogg Opus   # encoding only
Then what format should I use to open this file? Ogg? I tried, but there were also problems opening the resulting file. Here is the code, reduced to the part needed to open the file:
int main(int argc, char *argv[])
{
    // ...
    av_register_all();
    AVFormatContext *iFrmCtx = nullptr;
    AVFormatContext *oFrmCtx = nullptr;
    AVPacket packet;
    const char *iFilename = "opus.bin"; // Raw audio data with `opus` format
    const char *oFilename = "opus.mka"; // Audio file with `opus` audio format
    AVDictionary *frmOpts = nullptr;
    const qint32 smpRateErrorCode = av_dict_set_int(&frmOpts, "sample_rate", 8000, 0);
    const qint32 bitRateErrorCode = av_dict_set_int(&frmOpts, "bit_rate", 64000, 0);
    const qint32 channelErrorCode = av_dict_set_int(&frmOpts, "channels", 2, 0);
    if (smpRateErrorCode < 0 ||
        bitRateErrorCode < 0 ||
        channelErrorCode < 0) {
        return EXIT_FAILURE;
    }
    AVInputFormat *iFrm = av_find_input_format("opus"); // Error: Unknown input format
    if (iFrm == nullptr) {
        av_dict_free(&frmOpts);
        return EXIT_FAILURE;
    }
    qint32 ret = 0;
    if ((ret = avformat_open_input(&iFrmCtx, iFilename, iFrm, &frmOpts)) < 0) {
        av_dict_free(&frmOpts);
        return EXIT_FAILURE;
    }
    // We're doing something...
}
As said before, Opus is not self-delimiting; it needs a container. And since you got the raw data from an RTP payload, and Opus is a dynamic codec (with a dynamic payload size), you can't use FFmpeg's AVFormatContext to read the raw data from the file.
But you can work around this issue: instead of using av_read_frame to fill each AVPacket before decoding, you can fill the AVPacket's data and size manually and then push it to the decoder.
Note that you should also update the pts and dts of each AVPacket.
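A sketch of that workaround (hedged: how you obtain each packet's payload boundaries, and the decoder setup, are assumptions; raw Opus carries no framing, so the packet size must come from your transport, e.g. the RTP payload length):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Feed one raw Opus packet (e.g. one RTP payload) to an already-opened
// decoder by hand, instead of going through av_read_frame().
static int decodeOpusPacket(AVCodecContext *dec, const uint8_t *payload,
                            int payloadSize, int64_t pts, AVFrame *frame)
{
    AVPacket *pkt = av_packet_alloc();
    if (!pkt)
        return AVERROR(ENOMEM);

    // Fill data/size manually; the buffer must outlive the send call
    // (or use av_new_packet + memcpy to let libavcodec own a copy).
    pkt->data = const_cast<uint8_t *>(payload);
    pkt->size = payloadSize;
    pkt->pts  = pts;   // remember to set pts/dts for each packet
    pkt->dts  = pts;

    int ret = avcodec_send_packet(dec, pkt);
    if (ret >= 0)
        ret = avcodec_receive_frame(dec, frame);
    av_packet_free(&pkt);
    return ret;
}
```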

QT failed to load Image from Buffer

My work environment: Qt 5.8 MSVC2015 64-bit, Qt GraphicsView, Windows 7 64-bit.
I am loading an image from a buffer (a daemon process is going to send an image buffer), but creating the image from the buffer fails.
QFile file("D:\\2.png");
if (!file.open(QFile::ReadOnly))
    qDebug() << "Error failed to Open file";
QByteArray array = file.readAll();
array = array.toBase64();
QImage tempimage((uchar *)array.data(), 250, 250, QImage::Format_RGBX8888);
if (tempimage.isNull()) {
    /// I always get this error
    qDebug() << "Error!!! failed to create a image!";
}
Any idea what I am missing here ?
Why are you converting to base64?
Wait, where are you converting from PNG to an image plane?
Try bool QImage::loadFromData(const QByteArray &data, const char *format = Q_NULLPTR) to load the PNG instead of the CTor with the raw data.
If your wire format isn't PNG (and is in fact base64 encoded raw pixel data) then you want to convert FROM base64.
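If the daemon really does send base64-encoded raw pixels, the decode direction would look like this (a sketch; the 250x250 RGBX8888 geometry is taken from the question and assumed correct):

```cpp
#include <QByteArray>
#include <QImage>

// Decode base64 wire data back into raw pixels, then wrap them in a QImage.
QImage imageFromBase64RawPixels(const QByteArray &wireData)
{
    // Convert FROM base64 -- the opposite of QByteArray::toBase64().
    QByteArray raw = QByteArray::fromBase64(wireData);

    // This QImage constructor does not copy the buffer, so `raw` must
    // outlive the QImage; copy() detaches it into its own storage.
    QImage view(reinterpret_cast<const uchar *>(raw.constData()),
                250, 250, QImage::Format_RGBX8888);
    return view.copy();
}
```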
Thanks for all the suggestions and help.
I fixed my mistakes: I removed the base64 conversion and loaded the buffer using loadFromData with a reinterpret_cast.
Here is the final solution:
QFile file("D:\\2.png");
if (!file.open(QFile::ReadOnly))
    qDebug() << "Error failed to Open file";
QByteArray array = file.readAll();
QImage tempimage;
//// The cast below is very important; passing the QByteArray directly didn't work for me.
tempimage.loadFromData(reinterpret_cast<const uchar *>(array.data()), array.size());
if (tempimage.isNull()) {
    qDebug() << "Error!!! failed to create a image!";
}

libpng error invalid chunk type when loading png from memory

I'm trying to load a PNG from a memory buffer so I can access the image data without having to save it as a file first.
The memory buffer contains a valid PNG file; when I use fwrite to save it as a file on disk I get the following image: https://dl.dropboxusercontent.com/u/13077624/test.png
It represents a depth image received from a Kinect sensor, for those of you wondering.
This is the code that gives errors:
struct mem_encode
{
    char *buffer;
    png_uint_32 size;
    png_uint_32 current_pos;
};

void handle_data(const boost::system::error_code& error,
                 size_t bytes_transferred)
{
    if (!error)
    {
        cout << "Saving as file: " << determinePathExtension(PNGFrame, "png");
        FILE *fp = fopen("test.png", "wb");
        fwrite(data_, bytes_transferred, 1, fp);
        fclose(fp);
        // get PNG file info struct (memory is allocated by libpng)
        png_structp png_ptr = NULL;
        png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
        if (!png_ptr) {
            std::cerr << "ERROR: Couldn't initialize png read struct" << std::endl;
            cin.get();
            return; // Do your own error recovery/handling here
        }
        // get PNG image data info struct (memory is allocated by libpng)
        png_infop info_ptr = NULL;
        info_ptr = png_create_info_struct(png_ptr);
        if (!info_ptr) {
            std::cerr << "ERROR: Couldn't initialize png info struct" << std::endl;
            cin.get();
            png_destroy_read_struct(&png_ptr, (png_infopp)0, (png_infopp)0);
            return; // Do your own error recovery/handling here
        }
        struct mem_encode pngdata;
        pngdata.buffer = data_;
        pngdata.size = (png_uint_32)bytes_transferred;
        pngdata.current_pos = 0;
        png_set_read_fn(png_ptr, &pngdata, ReadData);
        // Start reading the png header
        png_set_sig_bytes(png_ptr, 8);
        png_read_info(png_ptr, info_ptr);
        // ... Program crashes here
    }
    else
    {
        cout << error.message() << " Bytes received: " << bytes_transferred << endl;
        delete this;
    }
}

static void ReadData(png_structp png_ptr, png_bytep outBytes,
                     png_size_t byteCountToRead)
{
    struct mem_encode *p = (struct mem_encode *)png_get_io_ptr(png_ptr);
    size_t nsize = p->size + byteCountToRead;
    if (byteCountToRead > (p->size - p->current_pos))
        png_error(png_ptr, "read error in read_data_memory (loadpng)");
    /* copy new bytes */
    memcpy(outBytes, p->buffer + p->size, byteCountToRead);
    p->current_pos += byteCountToRead;
}
Calling the method results in the program crashing with the following error:
libpng error: [00][00][00][00]: invalid chunk type
data_ represents the data buffer storing the PNG image and is a char *.
Any help would be appreciated.
Sources I used:
http://www.libpng.org/pub/png/libpng-1.0.3-manual.html
http://blog.hammerian.net/2009/reading-png-images-from-memory/
http://santosdev.blogspot.be/2012/08/loading-png-image-with-libpng-1512-or.html
http://www.piko3d.net/tutorials/libpng-tutorial-loading-png-files-from-streams/
Could this be caused by network bytes being translated badly?
I think you forgot to read the PNG signature bytes. Use

if (png_sig_cmp(data, 0, 8))
    png_error(png_ptr, "it's not a PNG file");

Then your

png_set_sig_bytes(png_ptr, 8);

lets libpng know you have already read the signature. Or you could use png_set_sig_bytes(png_ptr, 0); and let libpng do the checking for you.
Are you sure your ReadData function is correct? Why does your memcpy start from the address p->buffer + p->size? Isn't that the end of the buffer? And what is nsize for? It is computed but never used.
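For reference, a corrected read callback might look like this (a sketch that keeps the question's mem_encode struct; the fix is copying from current_pos rather than from size, and dropping the unused nsize):

```cpp
#include <cstring>
#include <png.h>

struct mem_encode {
    char *buffer;
    png_uint_32 size;
    png_uint_32 current_pos;
};

// libpng read callback: copy the next byteCountToRead bytes from the
// in-memory buffer, starting at the current read position.
static void ReadData(png_structp png_ptr, png_bytep outBytes,
                     png_size_t byteCountToRead)
{
    mem_encode *p = static_cast<mem_encode *>(png_get_io_ptr(png_ptr));
    if (byteCountToRead > p->size - p->current_pos)
        png_error(png_ptr, "read error in ReadData (loadpng)");
    memcpy(outBytes, p->buffer + p->current_pos, byteCountToRead); // not p->size!
    p->current_pos += byteCountToRead;
}
```

If you keep png_set_sig_bytes(png_ptr, 8), remember to initialize current_pos to 8 so the callback also skips the signature bytes.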

Video for Windows - Adding Audio Stream to AVI

I have a really simple program that adds an audio stream to an AVI file with a pre-existing video stream.
The issue is that the resulting file contains the new audio stream, but there does not appear to be any data in it.
The audio file is read by CWaveFile (SDKwavefile) from the DirectX samples.
AVIFileInit();
PAVIFILE avi;
AVIFileOpen(&avi, argv[1], OF_WRITE, NULL);
CWaveFile wav;
wav.Open(argv[2], NULL, WAVEFILE_READ);
WAVEFORMATEX *wavFormat = wav.GetFormat();
PAVISTREAM audioStream;
AVIFileCreateStream(avi, &audioStream, &audioInfo);
AVISTREAMINFO audioInfo;
memset(&audioInfo, 0, sizeof(AVISTREAMINFO));
audioInfo.fccType = streamtypeAUDIO;
audioInfo.dwScale = wavFormat->nBlockAlign;
audioInfo.dwRate = wavFormat->nSamplesPerSec * wavFormat->nBlockAlign;
audioInfo.dwSampleSize = wavFormat->nBlockAlign;
audioInfo.dwQuality = (DWORD)-1;
AVIStreamSetFormat(audioStream, 0, wavFormat, sizeof(WAVEFORMATEX));
BYTE *data = (BYTE *)malloc(wav.GetSize());
DWORD sizeRead;
wav.Read(data, wav.GetSize(), &sizeRead);
AVIStreamWrite(audioStream, 0, (wav.GetSize() * 8) / wavFormat->wBitsPerSample, data, wav.GetSize(), 0, NULL, NULL);
AVIStreamRelease(audioStream);
free(data);
wav.Close();
AVIFileRelease(avi);
AVIFileExit();
(Also, I know I shouldn't be using VFW anymore, but that decision goes way above my head. And I know I'm not checking the result of anything; that can come later.)
Thanks.
I tried to use this to add a .wav to an existing .avi (although I had a class CWaveSoundRead).
If you check the return codes, you get to AVIStreamWrite(), which returns 0x80044065; that turns out to be AVIERR_UNSUPPORTED.
In hindsight, I'd say you called AVIFileCreateStream() before you filled in the AVISTREAMINFO object. Actually, now that I see it, it's hard to imagine your code compiling as-is, since audioInfo is defined AFTER the AVIFileCreateStream call!
Here's something I did, although it still gets the audio stream length wrong:
struct FmtChunk {
    char id[4];                    //="fmt "
    unsigned long size;            //=16 or 0x28
    short wFormatTag;              //=WAVE_FORMAT_PCM=1
    unsigned short wChannels;      //=1 or 2 for mono or stereo
    unsigned long dwSamplesPerSec; //=11025 or 22050 or 44100
    unsigned long dwAvgBytesPerSec;//=wBlockAlign * dwSamplesPerSec
    unsigned short wBlockAlign;    //=wChannels * (wBitsPerSample==8?1:2)
    unsigned short wBitsPerSample; //=8 or 16, for bits per sample
};
struct DataChunk {
    char id[4];                    //="data"
    unsigned long size;            //=datsize, size of the following array
    unsigned char data[1];         //=the raw data goes here
};
struct WavChunk {
    char id[4];                    //="RIFF"
    unsigned long size;            //=datsize+8+16+4
    char type[4];                  //="WAVE"
};
bool Q_AVI_AddWav(cstring fnameVideo, cstring fnameAudio)
// Adds a .wav file to an existing .avi (with video stream)
{
    IAVIStream *m_pStreamAudio = 0;
    HRESULT hr;
    AVIFileInit();
    PAVIFILE avi;
    hr = AVIFileOpen(&avi, fnameVideo, OF_WRITE, NULL);
    CHECK(hr, "AVIFileOpen");
    WavChunk wav;
    FmtChunk fmt;
    DataChunk dat;
    // read wav file
    FILE *fr;
    int pos;
    fr = qfopen(fnameAudio, "rb");
    // Read header
    fread(&wav, 1, sizeof(wav), fr);
    // Read 'fmt' chunk; may be 16 or 40 in length
    pos = ftell(fr);
    fread(&fmt, 1, sizeof(fmt), fr);
    if (fmt.size == 40) fseek(fr, 40 - 16, SEEK_CUR); // Skip rest of fmt
    // else it's ok
    // Read data specs
    fread(&dat, sizeof(dat), 1, fr);
    char *buf = new char[dat.size];
    qdbg("Wav data %d bytes\n", dat.size);
    fread(buf, 1, dat.size, fr);
    qfclose(fr);
    // set wave format info
    WAVEFORMATEX wfx;
    wfx.wFormatTag = fmt.wFormatTag;
    wfx.cbSize = 0;
    wfx.nAvgBytesPerSec = fmt.dwAvgBytesPerSec;
    wfx.nBlockAlign = fmt.wBlockAlign;
    wfx.nChannels = fmt.wChannels;
    wfx.nSamplesPerSec = fmt.dwSamplesPerSec;
    wfx.wBitsPerSample = fmt.wBitsPerSample;
    // create audio stream
    AVISTREAMINFO ahdr; ZeroMemory(&ahdr, sizeof(ahdr));
    ahdr.fccType = streamtypeAUDIO;
    ahdr.dwScale = wfx.nBlockAlign;
    ahdr.dwRate = wfx.nSamplesPerSec * wfx.nBlockAlign;
    ahdr.dwSampleSize = wfx.nBlockAlign;
    ahdr.dwQuality = (DWORD)-1;
    hr = AVIFileCreateStream(avi, &m_pStreamAudio, &ahdr);
    CHECK(hr, "AVIFileCreateStream");
    if (hr != AVIERR_OK) { if (buf) QDELETE_ARRAY(buf); /*delete[] buf;*/ return false; }
    hr = AVIStreamSetFormat(m_pStreamAudio, 0, &wfx, sizeof(WAVEFORMATEX));
    CHECK(hr, "AVIStreamSetFormat");
    if (hr != AVIERR_OK) { if (buf) QDELETE_ARRAY(buf); /*delete[] buf;*/ return false; }
    // write audio stream
    unsigned long numbytes = dat.size;
    unsigned long numsamps = fmt.wChannels * numbytes * 8 / wfx.wBitsPerSample;
    hr = AVIStreamWrite(m_pStreamAudio, 0, numsamps, buf, numbytes, 0, 0, 0);
    CHECK(hr, "AVIStreamWrite");
    qdbg("Write numsamps %d, numbytes %d\n", numsamps, numbytes);
    QDELETE_ARRAY(buf); // if (buf) delete[] buf;
    // Release audio stream
    AVIStreamRelease(m_pStreamAudio);
    // Close AVI
    hr = AVIFileRelease(avi);
    CHECK(hr, "AVIFileRelease");
    // Close VFW
    AVIFileExit();
    return hr == AVIERR_OK;
}

How do I get the DC coefficient from a jpg using the jpg library?

I am new to this stuff, but I need to get the DC coefficient from a JPEG using the jpeg library.
I was told as a hint that the corresponding function is in jdhuff.c, but I can't find it. I tried to find a decent article about the jpeg library that covers this, but no success so far.
So I hope you guys can help me a bit and point me to some documentation, or share a hint.
So, here is what I know:
A JPEG picture consists of 8x8 blocks, i.e. 64 pixels each. 63 of the coefficients are called AC, and 1 is called DC. That's the coefficient I need; its position is array[0][0].
But how exactly do I read it with the jpeg library? I am using C++.
edit:
This is what I have so far:
read_jpeg::read_jpeg( const std::string& filename )
{
    FILE *fp = NULL;              // File pointer
    jpeg_decompress_struct cinfo; // jpeg decompression parameters
    JSAMPARRAY buffer;            // Output row buffer
    int row_stride = 0;           // physical row width
    my_error_mgr jerr;            // Custom error manager
    // Set error manager
    cinfo.err = jpeg_std_error(&jerr.pub);
    jerr.pub.error_exit = my_error_exit;
    // Handle longjmp
    if (setjmp(jerr.setjmp_buffer)) {
        // JPEG has signaled an error. Clean up and throw an exception.
        jpeg_destroy_decompress(&cinfo);
        fclose(fp);
        throw std::runtime_error("Error: jpeg has reported an error.");
    }
    // Open the file
    if ( (fp = fopen(filename.c_str(), "rb")) == NULL )
    {
        std::stringstream ss;
        ss << "Error: Cannot read '" << filename.c_str() << "' from the specified location!";
        throw std::runtime_error(ss.str());
    }
    // Initialize jpeg decompression
    jpeg_create_decompress(&cinfo);
    // Show jpeg where to read the data
    jpeg_stdio_src(&cinfo, fp);
    // Read the header
    jpeg_read_header(&cinfo, TRUE);
    // Decompress the file
    jpeg_start_decompress(&cinfo);
    // JSAMPLEs per row in output buffer
    row_stride = cinfo.output_width * cinfo.output_components;
    // Make a one-row-high sample array
    buffer = (*cinfo.mem->alloc_sarray)((j_common_ptr) &cinfo, JPOOL_IMAGE, row_stride, 1);
    // Read image using jpeg's counter
    while (cinfo.output_scanline < cinfo.output_height)
    {
        // Read the image
        jpeg_read_scanlines(&cinfo, buffer, 1);
    }
    // Finish the decompress
    jpeg_finish_decompress(&cinfo);
    // Release memory
    jpeg_destroy_decompress(&cinfo);
    // Close the file
    fclose(fp);
}
This is not possible with the standard decompression path: jpeg_read_scanlines only yields raw pixel data of the Y/Cb/Cr channels, never the coefficients.
To get the coefficients' data you'd either have to hack the decode_mcu function (or its callers) to save the data decoded there, or use the library's coefficient-access entry point, jpeg_read_coefficients, which skips the IDCT entirely.
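A minimal sketch of the jpeg_read_coefficients route (assumptions: cinfo already has a data source set and jpeg_read_header has been called; error handling and the closing jpeg_finish_decompress/jpeg_destroy_decompress calls are omitted):

```cpp
#include <jpeglib.h>

// Read the DC coefficient (index [0][0]) of the first 8x8 block of
// component 0, using the coefficient-access API instead of a full decode.
JCOEF firstDcCoefficient(jpeg_decompress_struct &cinfo)
{
    // Returns one virtual coefficient array per color component.
    jvirt_barray_ptr *coeffs = jpeg_read_coefficients(&cinfo);

    // Map the first row of 8x8 blocks of component 0 into memory.
    JBLOCKARRAY rows = (*cinfo.mem->access_virt_barray)(
        (j_common_ptr)&cinfo, coeffs[0],
        0,      // first block-row
        1,      // number of block-rows to access
        FALSE); // read-only

    // rows[0][0] is one JBLOCK (64 JCOEFs); element 0 is the DC term.
    // It is still quantized: multiply by the quantization-table entry
    // if you want the dequantized value.
    return rows[0][0][0];
}
```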