FFmpeg decoder yuv420p - C++

I am working on a yuv420p video player with FFmpeg, but it's not working and I can't figure out why. I have spent the whole week on it...
So I have a test which just decodes some frames and reads them, but the output always differs, and it's really weird.
I use a video (MP4, yuv420p) which colors one more pixel black in each frame:
For the video, put http://sendvid.com/b1sgf8r1 on a website like http://www.telechargerunevideo.com/en/
VideoContext is just a little struct:
struct VideoContext {
    unsigned int currentFrame;
    std::size_t size;
    int width;
    int height;
    bool pause;
    AVFormatContext* formatCtx;
    AVCodecContext* codecCtxOrig;
    AVCodecContext* codecCtx;
    int streamIndex;
};
So I have a function to count the number of black pixels:
std::size_t checkFrameNb(const AVFrame* frame) {
    std::size_t nb = 0;
    for (int y = 0; y < frame->height; ++y) {
        for (int x = 0; x < frame->width; ++x) {
            if (frame->data[0][(y * frame->linesize[0]) + x] == BLACK_FRAME.y
                && frame->data[1][(y / 2 * frame->linesize[1]) + x / 2] == BLACK_FRAME.u
                && frame->data[2][(y / 2 * frame->linesize[2]) + x / 2] == BLACK_FRAME.v)
                ++nb;
        }
    }
    return nb;
}
And this is how I decode one frame:
const AVFrame* VideoDecoder::nextFrame(entities::VideoContext& context) {
    int frameFinished;
    AVPacket packet;

    // Allocate video frame
    AVFrame* frame = av_frame_alloc();
    if (frame == nullptr)
        throw;

    // Initialize frame->linesize
    avpicture_fill((AVPicture*)frame, nullptr, AV_PIX_FMT_YUV420P, context.width, context.height);

    while (av_read_frame(context.formatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == context.streamIndex) {
            // Decode video frame
            avcodec_decode_video2(context.codecCtx, frame, &frameFinished, &packet);

            // Did we get a video frame?
            if (frameFinished) {
                // Free the packet that was allocated by av_read_frame
                av_free_packet(&packet);
                ++context.currentFrame;
                return frame;
            }
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    throw core::GlobalException("nextFrame", "Frame decode failed");
}
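As a side note, avcodec_decode_video2, avpicture_fill and av_free_packet are deprecated in current FFmpeg releases. A minimal sketch of the same read/decode loop with the send/receive API (a sketch against those newer headers, not the code under test above):

const AVFrame* nextFrameModern(entities::VideoContext& context) {
    AVPacket* packet = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();   // the decoder fills this; no avpicture_fill needed
    while (av_read_frame(context.formatCtx, packet) >= 0) {
        if (packet->stream_index == context.streamIndex
            && avcodec_send_packet(context.codecCtx, packet) == 0
            && avcodec_receive_frame(context.codecCtx, frame) == 0) {
            av_packet_free(&packet);
            ++context.currentFrame;
            return frame;                // caller releases it with av_frame_free()
        }
        av_packet_unref(packet);         // not a finished video frame yet
    }
    av_packet_free(&packet);
    av_frame_free(&frame);
    return nullptr;                      // end of stream or decode failure
}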
Is there already something wrong in the nextFrame function above?
Maybe the context initialization will be useful:
entities::VideoContext VideoLoader::loadVideoContext(const char* file,
                                                     const int width,
                                                     const int height) {
    entities::VideoContext context;

    // Register all formats and codecs
    av_register_all();
    context.formatCtx = avformat_alloc_context();

    // Open video file
    if (avformat_open_input(&context.formatCtx, file, nullptr, 0) != 0)
        throw; // Couldn't open file

    // Retrieve stream information
    if (avformat_find_stream_info(context.formatCtx, nullptr) > 0)
        throw; // Couldn't find stream information

    // Dump information about file onto standard error
    //av_dump_format(m_formatCtx, 0, file, 1);

    // Find the first video stream because we don't need more
    for (unsigned int i = 0; i < context.formatCtx->nb_streams; ++i)
        if (context.formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            context.streamIndex = i;
            context.codecCtx = context.formatCtx->streams[i]->codec;
            break;
        }
    if (context.codecCtx == nullptr)
        throw; // Didn't find a video stream

    // Find the decoder for the video stream
    AVCodec* codec = avcodec_find_decoder(context.codecCtx->codec_id);
    if (codec == nullptr)
        throw; // Codec not found

    // Copy context
    if ((context.codecCtxOrig = avcodec_alloc_context3(codec)) == nullptr)
        throw;
    if (avcodec_copy_context(context.codecCtxOrig, context.codecCtx) != 0)
        throw; // Error copying codec context

    // Open codec
    if (avcodec_open2(context.codecCtx, codec, nullptr) < 0)
        throw; // Could not open codec

    context.currentFrame = 0;
    decoder::VideoDecoder::setVideoSize(context);
    context.pause = false;
    context.width = width;
    context.height = height;

    return std::move(context);
}
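For what it's worth, formatCtx->streams[i]->codec and avcodec_copy_context are deprecated in current FFmpeg; a codecpar-based setup replaces both. A rough sketch (assuming a single video stream and keeping the error handling minimal):

int streamIndex = av_find_best_stream(context.formatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
if (streamIndex < 0)
    throw core::GlobalException("loadVideoContext", "No video stream");
AVCodecParameters* par = context.formatCtx->streams[streamIndex]->codecpar;
AVCodec* codec = avcodec_find_decoder(par->codec_id);
AVCodecContext* codecCtx = avcodec_alloc_context3(codec);
// Copy the stream parameters into the fresh context, then open it
if (avcodec_parameters_to_context(codecCtx, par) < 0
    || avcodec_open2(codecCtx, codec, nullptr) < 0)
    throw core::GlobalException("loadVideoContext", "Could not open codec");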
I know it's not a small piece of code; if you have any idea how to make the example more brief, go ahead.
And if someone has an idea about this issue, here is my output:
9 - 10 - 12 - 4 - 10 - 14 - 11 - 8 - 9 - 10
But I want:
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10
PS:
The get-FPS and get-video-size functions are copy-pasted from OpenCV code.
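Since the PS only mentions those helpers in passing, here is a sketch of how both values can be read straight from the FFmpeg stream instead (hypothetical helpers, not the OpenCV-derived code referred to above):

double getFps(const entities::VideoContext& context) {
    // avg_frame_rate is an AVRational; av_q2d converts it to double
    AVRational r = context.formatCtx->streams[context.streamIndex]->avg_frame_rate;
    return r.den ? av_q2d(r) : 0.0;
}

void setVideoSizeFromStream(entities::VideoContext& context) {
    // Dimensions as reported by the opened decoder context
    context.width = context.codecCtx->width;
    context.height = context.codecCtx->height;
}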

Related

OpenCV VideoWriter C++ not Writing to Output.avi

I'm attempting to write code that takes frames and writes them to an output file (.avi / .mp4) using cv::VideoWriter in C++.
System setup is Linux Ubuntu 18.04.4 LTS, Qt 5.9.5 (GCC 7.3.0, 64 bit).
I'm getting frames from the camera, converting them to QImage and displaying them on the GUI without problems.
void FrameThread::run()
{
    while (m_isWaitting)
    {
        if (SUCCESS == Buf_WaitForFrame(m_Cam, &m_frame))
        {
            int channels = m_frame.ucChannels;
            int width = m_frame.usWidth;
            int height = m_frame.usHeight;
            int elementBytes = m_frame.ucElemBytes;

            QImage img;
            img = QImage(width, height, QImage::Format_Grayscale8);
            uchar *pSrc = (uchar *)m_frame.pBuffer + m_frame.usHeader;
            uchar *pDst = (uchar *)img.bits();
            if (2 == elementBytes)
            {
                pSrc += (elementBytes / 2);
                if (1 == channels)
                {
                    int pixels = width * height * channels;
                    for (int i = 0; i < pixels; ++i)
                    {
                        *pDst++ = *pSrc;
                        pSrc += elementBytes;
                    }
                }
            }
            emit signalUpdateImage(img);

            if (m_isSaving)
            {
                saveImage();
            }

            if (m_isRecording)
            {
                if (!m_isCreateVideoWriter)
                {
                    QString savedPath = "/home/nvidia/Pictures/";
                    CvSize size = cvSize(2048, 1148);
                    char cur[1024];
                    timespec time;
                    clock_gettime(CLOCK_REALTIME, &time);
                    tm nowtm;
                    localtime_r(&time.tv_sec, &nowtm);
                    sprintf(cur, "%s%04d%02d%02d%02d%02d%02d.avi", savedPath.toLocal8Bit().data(),
                            nowtm.tm_year + 1900, nowtm.tm_mon + 1, nowtm.tm_mday,
                            nowtm.tm_hour, nowtm.tm_min, nowtm.tm_sec);
                    video = cv::VideoWriter(cur, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 25.0, size, 1);
                    m_isCreateVideoWriter = true;
                }
                else
                {
                    // Clicked 'Stop Record' button
                    // Close video writer
                    if (CamObject::getInstance()->m_bFinishRec)
                    {
                        video.release();
                        m_isRecording = false;
                        m_isCreateVideoWriter = false;
                        return;
                    }
                    // Recording...
                    else
                    {
                        videoFrame = QImageToCvMat(img);
                        video.write(videoFrame);
                    }
                }
            }
        }
    }
}
Also, I created the method QImageToCvMat because there is no write method that takes a QImage:
cv::Mat QImageToCvMat(QImage &inImage)
{
    cv::Mat mat(inImage.height(), inImage.width(),
                CV_8UC1, inImage.bits(), inImage.bytesPerLine());
    return mat;
}
The code fails to produce output and the program crashes on the line
video.write(videoFrame);
Is anyone familiar with this issue, or can kindly give advice? Is there another way to create a video file?
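Not an authoritative diagnosis, but one detail that commonly crashes VideoWriter::write: the writer above is opened with isColor = 1 and a fixed size of 2048x1148, so every Mat passed to write() must be CV_8UC3 (BGR) and exactly that size, while QImageToCvMat produces a single-channel CV_8UC1 view of the frame. A sketch of a conversion that satisfies both constraints (assuming the grayscale QImage from the thread above):

#include <opencv2/imgproc.hpp>

cv::Mat QImageToWriterFrame(const QImage &inImage)
{
    // Wrap the QImage pixels without copying (CV_8UC1, honouring the stride)
    cv::Mat gray(inImage.height(), inImage.width(), CV_8UC1,
                 const_cast<uchar*>(inImage.bits()),
                 static_cast<size_t>(inImage.bytesPerLine()));
    // The writer was opened with isColor = 1, so it expects 3-channel BGR
    cv::Mat bgr;
    cv::cvtColor(gray, bgr, cv::COLOR_GRAY2BGR);
    // Every frame must also match the size given when the writer was opened
    if (bgr.size() != cv::Size(2048, 1148))
        cv::resize(bgr, bgr, cv::Size(2048, 1148));
    return bgr; // owns its pixels, safe to hand to video.write()
}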

Legacy Print driver is fuzzy

We have an old print driver which takes a document and sends PCL to the spooler. The client then processes this PCL and displays everything as TIFF. Our users have been complaining that the TIFF is fuzzy and the image is not sharp. I have been given this stupid task of solving the mystery.
Is the PCL itself bad? I don't have enough knowledge about PCL and whether it carries a resolution. How do I trap the output of a driver that's being sent to the spooler?
Or is it the client that is somehow not rendering the PCL at a good resolution? Do I need to go through the pain of learning how to debug this driver? I will, but is it going to help me fix a resolution issue? I have never done driver development, so it's going to be a learning curve for me. But if I need to, that's OK. Where should I start? Is it the PCL that's bad, or the client that converts the PCL to a bitmap?
This is the C++ code:
BOOL APIENTRY
CreatePCLRasterGraphicPage(
    SURFOBJ *pso,
    BOOL firstPage,
    char *pageText
    )
/*++
Routine Description:
    Creates a PCL raster graphics page.
Arguments:
    SURFOBJ - Surface Object
    BOOL - First Page ?
    char * - Page Text
Return Value:
    BOOL - True if successful
--*/
{
    PDEVOBJ pDevObj = (PDEVOBJ)pso->dhpdev;
    POEMPDEV pOemPDEV = (POEMPDEV)pDevObj->pdevOEM;
    DWORD dwOffset = 0;
    DWORD dwWritten = 0;
    DWORD dwPageBufferSize = 0;
    int i = 0;
    ULONG n = 0;
    BYTE bitmapRow[1050];
    BYTE compRow[2100];
    DWORD dwRowSize = 0;
    DWORD dwCompPCLBitmapSize = 0;
    //wchar_t traceBuff[256];

    pOemPDEV->dwCompBitmapBufSize = 0;

    // TRACE OUT ----------------------------------------------------
    //ZeroMemory(traceBuff, 256);
    //StringCchPrintf(traceBuff, 256, L"Top of CreatePCLRasterGraphicPage");
    //WriteTraceLine(traceBuff);
    // ---------------------------------------------------------------

    // Invert color
    for (n = 0; n < pso->cjBits; n++)
        *(((PBYTE &)pso->pvBits) + n) ^= 0xFF;

    // Compress each row and store in a buffer with PCL line headings
    for (i = 0; i < pso->sizlBitmap.cy; i++) {
        // Zero Memory hack for bottom-of-form black line
        if (*(((PBYTE &)pso->pvScan0) + (i * pso->lDelta) + 319) == 0xFF)
            ZeroMemory(((PBYTE &)pso->pvScan0) + (i * pso->lDelta), 320);

        // Copy the bitmap scan line into bitmapRow and send it off to be compressed
        ZeroMemory(bitmapRow, 1050);
        ZeroMemory(compRow, 2100);
        MoveMemory(bitmapRow, ((PBYTE &)pso->pvScan0) + (i * pso->lDelta), pso->lDelta);
        dwRowSize = CompressBitmapRow(compRow, bitmapRow, pso->lDelta);

        // Create PCL row heading
        char bufPCLLineHead[9];
        StringCchPrintfA(bufPCLLineHead, 9, "%c%s%d%s", 27, "*b", dwRowSize, "W");

        if ((dwCompPCLBitmapSize + dwRowSize + strlen(bufPCLLineHead))
                > pOemPDEV->dwCompBitmapBufSize) {
            if (!GrowCompBitmapBuf(pOemPDEV)) {
                //ZeroMemory(traceBuff, 256);
                //StringCchPrintf(traceBuff, 256,
                //    L"Compressed bitmap buffer could not allocate more memory.");
                //WriteTraceLine(traceBuff);
            }
        }

        if (pOemPDEV->pCompBitmapBufStart) {
            // Write the PCL line heading to the buffer
            MoveMemory(pOemPDEV->pCompBitmapBufStart + dwCompPCLBitmapSize,
                       bufPCLLineHead, strlen(bufPCLLineHead));
            dwCompPCLBitmapSize += strlen(bufPCLLineHead);

            // Write the compressed row to the buffer
            MoveMemory(pOemPDEV->pCompBitmapBufStart + dwCompPCLBitmapSize,
                       compRow, dwRowSize);
            dwCompPCLBitmapSize += dwRowSize;
        }
    }

    // Calculate size and create buffer
    dwPageBufferSize = 21;
    if (!firstPage)
        dwPageBufferSize++;
    bGrowBuffer(pOemPDEV, dwPageBufferSize);

    // Add all raster header lines
    if (!firstPage)
    {
        // Add a form feed
        char bufFormFeed[2];
        StringCchPrintfA(bufFormFeed, 2, "%c", 12); // 1 char
        MoveMemory(pOemPDEV->pBufStart + dwOffset, bufFormFeed, 2);
        dwOffset += 1;
    }

    // Position cursor at X0, Y0
    char bufXY[8];
    StringCchPrintfA(bufXY, 8, "%c%s", 27, "*p0x0Y"); // 7 chars
    MoveMemory(pOemPDEV->pBufStart + dwOffset, bufXY, 8);
    dwOffset += 7;

    // Start raster graphics
    char bufStartRas[6];
    StringCchPrintfA(bufStartRas, 6, "%c%s", 27, "*r1A"); // 5 chars
    MoveMemory(pOemPDEV->pBufStart + dwOffset, bufStartRas, 6);
    dwOffset += 5;

    // Raster encoding - run-length encoding
    char bufRasEncoding[6];
    StringCchPrintfA(bufRasEncoding, 6, "%c%s", 27, "*b1M"); // 5 chars
    MoveMemory(pOemPDEV->pBufStart + dwOffset, bufRasEncoding, 6);
    dwOffset += 5;

    // Write out bitmap header PCL
    dwWritten = pDevObj->pDrvProcs->DrvWriteSpoolBuf(pDevObj, pOemPDEV->pBufStart, dwPageBufferSize);

    // Write out PCL plus compressed bitmap bytes
    dwWritten = pDevObj->pDrvProcs->DrvWriteSpoolBuf(pDevObj, pOemPDEV->pCompBitmapBufStart, dwCompPCLBitmapSize);

    // End raster graphics
    char bufEndRas[5];
    StringCchPrintfA(bufEndRas, 5, "%c%s", 27, "*rB"); // 4 chars
    MoveMemory(pOemPDEV->pBufStart + dwOffset, bufEndRas, 5);

    // Write out PCL end bitmap
    dwWritten = pDevObj->pDrvProcs->DrvWriteSpoolBuf(pDevObj, bufEndRas, 4);

    // Free compressed bitmap memory
    if (pOemPDEV->pCompBitmapBufStart) {
        MemFree(pOemPDEV->pCompBitmapBufStart);
        pOemPDEV->pCompBitmapBufStart = NULL;
        pOemPDEV->dwCompBitmapBufSize = 0;
        dwPageBufferSize = 0;
    }

    // Free memory
    vFreeBuffer(pOemPDEV);

    // Write page text to the spooler
    size_t charCount = 0;
    StringCchLengthA(pageText, 32767, &charCount);
    char bufWriteText[15];
    ZeroMemory(bufWriteText, 15);
    StringCchPrintfA(bufWriteText, 15, "%c%s%d%s", 27, "(r", charCount, "W");
    dwWritten = pDevObj->pDrvProcs->DrvWriteSpoolBuf(pDevObj, bufWriteText, strlen(bufWriteText));
    dwWritten = pDevObj->pDrvProcs->DrvWriteSpoolBuf(pDevObj, pageText, charCount);

    return TRUE;
}
BOOL
GrowCompBitmapBuf(
    POEMPDEV pOemPDEV
    )
/*++
Routine Description:
    Grows memory by 4096 bytes (per call) to hold compressed
    bitmap and PCL data.
Arguments:
    POEMPDEV - Pointer to the private PDEV structure
Return Value:
    BOOL - True if successful
--*/
{
    DWORD dwOldBufferSize = 0;
    PBYTE pNewBuffer = NULL;

    dwOldBufferSize = pOemPDEV->pCompBitmapBufStart ? pOemPDEV->dwCompBitmapBufSize : 0;
    pOemPDEV->dwCompBitmapBufSize = dwOldBufferSize + 4096;

    pNewBuffer = (PBYTE)MemAlloc(pOemPDEV->dwCompBitmapBufSize);
    if (pNewBuffer == NULL) {
        MemFree(pOemPDEV->pCompBitmapBufStart);
        pOemPDEV->pCompBitmapBufStart = NULL;
        pOemPDEV->dwCompBitmapBufSize = 0;
        return FALSE;
    }

    if (pOemPDEV->pCompBitmapBufStart) {
        CopyMemory(pNewBuffer, pOemPDEV->pCompBitmapBufStart, dwOldBufferSize);
        MemFree(pOemPDEV->pCompBitmapBufStart);
        pOemPDEV->pCompBitmapBufStart = pNewBuffer;
    }
    else {
        pOemPDEV->pCompBitmapBufStart = pNewBuffer;
    }

    return TRUE;
}
RLE decoding (I have not had a chance to add this yet). I was looking at different forums for how the code should look, and this is what I came up with. I will add it, test a document, and update the post.
public virtual sbyte[] decompressRL(sbyte[] data, int startOffset, int width, int count)
{
    /* type 1 compression */
    int dataCount = count;
    List<sbyte> decompressed = new List<sbyte>();
    int numberOfDecompressedBytes = 0;
    int dataStartOffset = startOffset;

    while (dataCount-- > 0)
    {
        int cntrlByte = (int)data[dataStartOffset++];

        // Repeated pattern
        int val = data[dataStartOffset++];
        dataCount--;
        while (cntrlByte >= 0)
        {
            decompressed.Insert(numberOfDecompressedBytes++, (sbyte)val);
            cntrlByte--;
        }
    }
    mMaxUncompressedByteCount = numberOfDecompressedBytes;
    return getBytes(decompressed);
}
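The driver code above calls CompressBitmapRow without showing it. For reference, here is a minimal sketch of PCL compression method 1 (run-length encoding), which is what the ESC *b1M command selects and what the decompressor above undoes: each output pair is a control byte holding the repeat count minus one (0 means one occurrence, 255 means 256), followed by the byte to repeat. A generic sketch, not the actual driver code:

DWORD CompressBitmapRowSketch(BYTE* dst, const BYTE* src, DWORD srcLen)
{
    DWORD out = 0;
    DWORD i = 0;
    while (i < srcLen) {
        BYTE value = src[i];
        DWORD run = 1;
        // Extend the run while the byte repeats, up to 256 per pair
        while (i + run < srcLen && src[i + run] == value && run < 256)
            ++run;
        dst[out++] = (BYTE)(run - 1); // control byte: repeat count - 1
        dst[out++] = value;           // the byte to repeat
        i += run;
    }
    return out; // compressed size in bytes; worst case twice the input
}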
This is how fuzzy the users claim the image looks when printing from a Word document through the driver. The original is very clear.

Record Lowest quality possible with OpenAL?

I'm currently using these settings with OpenAL, recording from a mic:
BUFFERSIZE 4410
FREQ 22050 // Sample rate
CAP_SIZE 10000 // How much to capture at a time (affects latency)
AL_FORMAT_MONO16
Is it possible to go lower in recording quality? I've tried reducing the sample rate, but the end result is a faster playback speed.
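One thing worth checking against that symptom: the rate passed to alcCaptureOpenDevice and the rate later passed to alBufferData must be the same value, otherwise playback speed shifts by their ratio (capturing at 11025 but tagging the buffer with 22050 plays back twice as fast). A minimal sketch of that invariant, with hypothetical names:

const ALCuint kSampleRate = 11025; // one shared rate for capture and playback

ALCdevice* openCapture()
{
    // 8-bit mono capture at kSampleRate
    return alcCaptureOpenDevice(NULL, kSampleRate, AL_FORMAT_MONO8, 4410);
}

void uploadForPlayback(ALuint bufferID, const ALchar* data, ALsizei len)
{
    // Tag the buffer with the SAME rate it was captured at
    alBufferData(bufferID, AL_FORMAT_MONO8, data, len, kSampleRate);
}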
Alright, so this is some of the most hacky code I've ever written, and I truly hope no one in their right mind ever uses it in production... just sooooo many bad things.
But to answer your question, I've been able to get the quality down to 8-bit mono recording at 11025 Hz. However, everything I've recorded from my mic comes with significant amounts of static, and I'm not entirely sure why. I've generated 8-bit Karplus-Strong string plucks that sound fantastic, so it could just be my recording device.
#include <AL/al.h>
#include <AL/alc.h>
#include <conio.h>
#include <stdio.h>
#include <vector>
#include <time.h>

void sleep(clock_t wait)
{
    clock_t goal;
    goal = wait + clock();
    while (goal > clock())
        ;
}

#define BUFFERSIZE 4410
const int SRATE = 11025;

int main()
{
    std::vector<ALchar> vBuffer;
    ALCdevice *pDevice = NULL;
    ALCcontext *pContext = NULL;
    ALCdevice *pCaptureDevice;
    const ALCchar *szDefaultCaptureDevice;
    ALint iSamplesAvailable;
    ALchar Buffer[BUFFERSIZE];
    ALint iDataSize = 0;
    ALint iSize;

    // NOTE : This code does NOT setup the Wave Device's Audio Mixer to select a recording input
    // or a recording level.
    pDevice = alcOpenDevice(NULL);
    pContext = alcCreateContext(pDevice, NULL);
    alcMakeContextCurrent(pContext);
    printf("Capture Application\n");
    if (pDevice == NULL)
    {
        printf("Failed to initialize OpenAL\n");
        // Shutdown code goes here
        return 0;
    }

    // Check for Capture Extension support
    pContext = alcGetCurrentContext();
    pDevice = alcGetContextsDevice(pContext);
    if (alcIsExtensionPresent(pDevice, "ALC_EXT_CAPTURE") == AL_FALSE) {
        printf("Failed to detect Capture Extension\n");
        // Shutdown code goes here
        return 0;
    }

    // Get list of available Capture Devices
    const ALchar *pDeviceList = alcGetString(NULL, ALC_CAPTURE_DEVICE_SPECIFIER);
    if (pDeviceList) {
        printf("\nAvailable Capture Devices are:-\n");
        while (*pDeviceList)
        {
            printf("%s\n", pDeviceList);
            pDeviceList += strlen(pDeviceList) + 1;
        }
    }

    // Get the name of the 'default' capture device
    szDefaultCaptureDevice = alcGetString(NULL, ALC_CAPTURE_DEFAULT_DEVICE_SPECIFIER);
    printf("\nDefault Capture Device is '%s'\n\n", szDefaultCaptureDevice);

    pCaptureDevice = alcCaptureOpenDevice(szDefaultCaptureDevice, SRATE, AL_FORMAT_MONO8, BUFFERSIZE);
    if (pCaptureDevice)
    {
        printf("Opened '%s' Capture Device\n\n", alcGetString(pCaptureDevice, ALC_CAPTURE_DEVICE_SPECIFIER));

        // Start audio capture
        alcCaptureStart(pCaptureDevice);

        // Wait for any key to get pressed before exiting
        while (!_kbhit())
        {
            // Release some CPU time ...
            sleep(1);

            // Find out how many samples have been captured
            alcGetIntegerv(pCaptureDevice, ALC_CAPTURE_SAMPLES, 1, &iSamplesAvailable);
            printf("Samples available : %d\r", iSamplesAvailable);

            // When we have enough data to fill half our BUFFERSIZE byte buffer, grab the samples
            if (iSamplesAvailable > (BUFFERSIZE / 2))
            {
                // Consume samples
                alcCaptureSamples(pCaptureDevice, Buffer, BUFFERSIZE / 2);

                // Write the audio data to a file
                //fwrite(Buffer, BUFFERSIZE, 1, pFile);
                for (int i = 0; i < BUFFERSIZE / 2; i++) {
                    vBuffer.push_back(Buffer[i]);
                }

                // Record total amount of data recorded (8-bit mono: 1 byte per sample)
                iDataSize += BUFFERSIZE / 2;
            }
        }

        // Stop capture
        alcCaptureStop(pCaptureDevice);

        // Check if any samples haven't been consumed yet
        alcGetIntegerv(pCaptureDevice, ALC_CAPTURE_SAMPLES, 1, &iSamplesAvailable);
        while (iSamplesAvailable)
        {
            if (iSamplesAvailable > (BUFFERSIZE / 2))
            {
                alcCaptureSamples(pCaptureDevice, Buffer, BUFFERSIZE / 2);
                for (int i = 0; i < BUFFERSIZE / 2; i++) {
                    vBuffer.push_back(Buffer[i]);
                }
                iSamplesAvailable -= (BUFFERSIZE / 2);
                iDataSize += BUFFERSIZE / 2;
            }
            else
            {
                alcCaptureSamples(pCaptureDevice, Buffer, iSamplesAvailable);
                // Only copy the samples that were actually captured
                for (int i = 0; i < iSamplesAvailable; i++) {
                    vBuffer.push_back(Buffer[i]);
                }
                iDataSize += iSamplesAvailable;
                iSamplesAvailable = 0;
            }
        }

        alcCaptureCloseDevice(pCaptureDevice);
    }

    //TODO::Make less hacky
    ALuint bufferID; // The OpenAL sound buffer ID
    ALuint sourceID; // The OpenAL sound source

    // Create sound buffer and source
    alGenBuffers(1, &bufferID);
    alGenSources(1, &sourceID);

    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
    alSource3f(sourceID, AL_POSITION, 0.0f, 0.0f, 0.0f);

    alBufferData(bufferID, AL_FORMAT_MONO8, &vBuffer[0], static_cast<ALsizei>(vBuffer.size()), SRATE);

    // Attach sound buffer to source
    alSourcei(sourceID, AL_BUFFER, bufferID);

    // Finally, play the sound!!!
    alSourcePlay(sourceID);

    printf("Press any key to continue...");
    getchar();
    return 0;
}
As you can see from:
alBufferData(bufferID, AL_FORMAT_MONO8, &vBuffer[0], static_cast<ALsizei>(vBuffer.size()), SRATE);
the playback buffer uses the same 8-bit mono format and 11025 Hz rate as the capture, so I've verified that this works. As demonstration code I'm okay throwing this example out there, but I wouldn't ever use it in production.
I'm not sure, but for me FREQ is the output frequency, not the sample rate.
define sampling-rate 48000
See this link: http://supertux.lethargik.org/wiki/OpenAL_Configuration

FFmpeg C++: Seeking to Frame 0

I have been successfully using:
avformat_seek_file(avFormatContext_, streamIndex_, 0, frame, frame, AVSEEK_FLAG_FRAME)
This, along with the example code included at the bottom, has allowed me to seek to specific I-frames in my videos and read frames from there until I reach the frame I want.
The problem is that the video files I am using have forward and backward interpolation, so the first keyframe is not at frame 0 but at something like frame 8.
What I am looking for is a way to seek to the frames that exist before that first keyframe in my video files. Any help would be appreciated.
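For reference, one approach that is sometimes suggested for this situation (a sketch, not verified against these particular files): seek by timestamp rather than by frame, leaving min_ts wide open so the demuxer may land on whatever it can decode from before the target, then flush and decode forward exactly as seekFrame/decodeSeekFrame below already do.

bool seekToBeginning(AVFormatContext* fmt, AVCodecContext* dec, int videoStream)
{
    // min_ts = INT64_MIN accepts any position at or before ts = 0,
    // so the demuxer can pick the earliest decodable point
    if (avformat_seek_file(fmt, videoStream, INT64_MIN, 0, 0, 0) < 0)
        return false;
    avcodec_flush_buffers(dec); // drop decoder state from before the seek
    return true;
}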
seekFrame:
bool QVideoDecoder::seekFrame(int64_t frame)
{
    if (!ok)
        return false;

    //printf("**** seekFrame to %d. LLT: %d. LT: %d. LLF: %d. LF: %d. LastFrameOk: %d\n",(int)frame,LastLastFrameTime,LastFrameTime,LastLastFrameNumber,LastFrameNumber,(int)LastFrameOk);

    // Seek if:
    // - we don't know where we are (Ok=false)
    // - we know where we are but:
    //   - the desired frame is after the last decoded frame (this could be optimized:
    //     if the distance is small, calling decodeSeekFrame may be faster than seeking
    //     from the last key frame)
    //   - the desired frame is smaller or equal than the previous to the last decoded
    //     frame. Equal because if frame==LastLastFrameNumber we don't want the LastFrame,
    //     but the one before -> we need to seek there
    if ((LastFrameOk == false) || ((LastFrameOk == true) && (frame <= LastLastFrameNumber || frame > LastFrameNumber)))
    {
        //printf("\t avformat_seek_file\n");
        if (ffmpeg::avformat_seek_file(pFormatCtx, videoStream, 0, frame, frame, AVSEEK_FLAG_FRAME) < 0)
            return false;

        avcodec_flush_buffers(pCodecCtx);
        DesiredFrameNumber = frame;
        LastFrameOk = false;
    }

    //printf("\t decodeSeekFrame\n");
    return decodeSeekFrame(frame);
}
decodeSeekFrame:
bool QVideoDecoder::decodeSeekFrame(int after)
{
    if (!ok)
        return false;

    //printf("decodeSeekFrame. after: %d. LLT: %d. LT: %d. LLF: %d. LF: %d. LastFrameOk: %d.\n",after,LastLastFrameTime,LastFrameTime,LastLastFrameNumber,LastFrameNumber,(int)LastFrameOk);

    // If the last decoded frame satisfies the time condition we return it
    //if( after!=-1 && ( LastDataInvalid==false && after>=LastLastFrameTime && after <= LastFrameTime))
    if (after != -1 && (LastFrameOk == true && after >= LastLastFrameNumber && after <= LastFrameNumber))
    {
        // This is the frame we want to return

        // Compute desired frame time
        ffmpeg::AVRational millisecondbase = {1, 1000};
        DesiredFrameTime = ffmpeg::av_rescale_q(after, pFormatCtx->streams[videoStream]->time_base, millisecondbase);
        //printf("Returning already available frame %d # %d. DesiredFrameTime: %d\n",LastFrameNumber,LastFrameTime,DesiredFrameTime);
        return true;
    }

    // The last decoded frame wasn't ok; either we need any new frame (after=-1),
    // or a specific new frame with time > after
    bool done = false;
    while (!done)
    {
        // Read a frame
        if (av_read_frame(pFormatCtx, &packet) < 0)
            return false; // Frame read failed (e.g. end of stream)

        //printf("Packet of stream %d, size %d\n",packet.stream_index,packet.size);

        if (packet.stream_index == videoStream)
        {
            // This is a packet from the video stream -> decode the video frame
            int frameFinished;
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

            //printf("used %d out of %d bytes\n",len,packet.size);
            //printf("Frame type: ");
            //if(pFrame->pict_type == FF_B_TYPE)
            //    printf("B\n");
            //else if (pFrame->pict_type == FF_I_TYPE)
            //    printf("I\n");
            //else
            //    printf("P\n");

            /*printf("codecctx time base: num: %d den: %d\n",pCodecCtx->time_base.num,pCodecCtx->time_base.den);
            printf("formatctx time base: num: %d den: %d\n",pFormatCtx->streams[videoStream]->time_base.num,pFormatCtx->streams[videoStream]->time_base.den);
            printf("pts: %ld\n",pts);
            printf("dts: %ld\n",dts);*/

            // Did we get a video frame?
            if (frameFinished)
            {
                ffmpeg::AVRational millisecondbase = {1, 1000};
                int f = packet.dts;
                int t = ffmpeg::av_rescale_q(packet.dts, pFormatCtx->streams[videoStream]->time_base, millisecondbase);

                if (LastFrameOk == false)
                {
                    LastFrameOk = true;
                    LastLastFrameTime = LastFrameTime = t;
                    LastLastFrameNumber = LastFrameNumber = f;
                }
                else
                {
                    // If we decoded 2 frames in a row, the last times are okay
                    LastLastFrameTime = LastFrameTime;
                    LastLastFrameNumber = LastFrameNumber;
                    LastFrameTime = t;
                    LastFrameNumber = f;
                }

                //printf("Frame %d # %d. LastLastT: %d. LastLastF: %d. LastFrameOk: %d\n",LastFrameNumber,LastFrameTime,LastLastFrameTime,LastLastFrameNumber,(int)LastFrameOk);

                // Is this frame the desired frame?
                if (after == -1 || LastFrameNumber >= after)
                {
                    // It's the desired frame

                    // Convert the image format (init the context the first time)
                    int w = pCodecCtx->width;
                    int h = pCodecCtx->height;
                    img_convert_ctx = ffmpeg::sws_getCachedContext(img_convert_ctx, w, h, pCodecCtx->pix_fmt, w, h, ffmpeg::PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
                    if (img_convert_ctx == NULL)
                    {
                        printf("Cannot initialize the conversion context!\n");
                        return false;
                    }
                    ffmpeg::sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

                    // Convert the frame to QImage
                    LastFrame = QImage(w, h, QImage::Format_RGB888);
                    for (int y = 0; y < h; y++)
                        memcpy(LastFrame.scanLine(y), pFrameRGB->data[0] + y * pFrameRGB->linesize[0], w * 3);

                    // Set the time
                    DesiredFrameTime = ffmpeg::av_rescale_q(after, pFormatCtx->streams[videoStream]->time_base, millisecondbase);
                    LastFrameOk = true;
                    done = true;
                } // frame of interest
            } // frameFinished
        } // stream_index==videoStream

        av_free_packet(&packet); // Free the packet that was allocated by av_read_frame
    }

    //printf("Returning new frame %d # %d. LastLastT: %d. LastLastF: %d. LastFrameOk: %d\n",LastFrameNumber,LastFrameTime,LastLastFrameTime,LastLastFrameNumber,(int)LastFrameOk);
    //printf("\n");
    return done; // done indicates whether or not we found a frame
}

DirectShow ISampleGrabber: samples are upside-down and color channels reversed

I have to use MS DirectShow to capture video frames from a camera (I just want the raw pixel data).
I was able to build the graph/filter network (capture device filter and ISampleGrabber) and implement the callback (ISampleGrabberCB). I receive samples of the appropriate size.
However, they are always upside down (flipped vertically, that is, not rotated) and the color channels are in BGR order (not RGB).
I tried setting the biHeight field in the BITMAPINFOHEADER to both positive and negative values, but it doesn't have any effect. According to the MSDN documentation, ISampleGrabber::SetMediaType() ignores the format block for video data anyway.
Here is what I see (recorded with a different camera, not DS) compared to what the DirectShow ISampleGrabber gives me: the "RGB" is actually in red, green and blue respectively.
A sample of the code I'm using, slightly simplified:
// Setting the media type...
AM_MEDIA_TYPE* media_type = 0;
this->ds.device_streamconfig->GetFormat(&media_type); // The IAMStreamConfig of the capture device

// Find the BMI header in the media type struct
BITMAPINFOHEADER* bmi_header;
if (media_type->formattype == FORMAT_VideoInfo) {
    bmi_header = &((VIDEOINFOHEADER*)media_type->pbFormat)->bmiHeader;
} else if (media_type->formattype == FORMAT_VideoInfo2) {
    bmi_header = &((VIDEOINFOHEADER2*)media_type->pbFormat)->bmiHeader;
} else {
    return false;
}

// Apply changes
media_type->subtype = MEDIASUBTYPE_RGB24;
bmi_header->biWidth = width;
bmi_header->biHeight = height;

// Set format to video device
this->ds.device_streamconfig->SetFormat(media_type);

// Set format for sample grabber
// bmi_header->biHeight = -(height); // tried this for either and both interfaces, no effect
this->ds.sample_grabber->SetMediaType(media_type);

// Connect filter pins
IPin* out_pin = getFilterPin(this->ds.device_filter, OUT, 0);        // IBaseFilter interface for the capture device
IPin* in_pin  = getFilterPin(this->ds.sample_grabber_filter, IN, 0); // IBaseFilter interface for the sample grabber filter
out_pin->Connect(in_pin, media_type);

// Start capturing by callback
this->ds.sample_grabber->SetBufferSamples(false);
this->ds.sample_grabber->SetOneShot(false);
this->ds.sample_grabber->SetCallback(this, 1);

// Start recording
this->ds.media_control->Run(); // IMediaControl interface
I'm checking the return value of every function and don't get any errors.
I'm thankful for any hint or idea.
Things I already tried:
Setting the biHeight field to a negative value for either the capture device filter or the sample grabber, for both, or for neither - it doesn't have any effect.
Using IGraphBuilder to connect the pins - same problem.
Connecting the pins before changing the media type - same problem.
Checking whether the media type was actually applied by the filter by querying it again - it apparently is applied, or at least stored.
Interpreting the image as totally byte-reversed (last byte first, first byte last) - then it would be flipped horizontally.
Checking whether it's a problem with the video camera - when I test it with VLC (DirectShow capture) it looks normal.
My quick hack for this:
void Camera::OutputCallback(unsigned char* data, int len, void *instance_)
{
    Camera *instance = reinterpret_cast<Camera*>(instance_);
    int j = 0;
    for (int i = len - 4; i > 0; i -= 4)
    {
        instance->buffer[j]     = data[i];
        instance->buffer[j + 1] = data[i + 1];
        instance->buffer[j + 2] = data[i + 2];
        instance->buffer[j + 3] = data[i + 3];
        j += 4;
    }
    Transport::RTPPacket packet;
    packet.payload = instance->buffer;
    packet.payloadSize = len;
    instance->receiver->Send(packet);
}
This is correct for the RGB32 color space; for other color spaces the code needs to be adapted.
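For comparison, a vertical-only flip copies whole rows in reverse order instead of reversing every pixel, so it does not also mirror the image left-to-right. A minimal sketch for RGB32, assuming tightly packed rows of 4 bytes per pixel:

void FlipVerticalRGB32(unsigned char* dst, const unsigned char* src,
                       int width, int height)
{
    const int stride = width * 4; // RGB32: 4 bytes per pixel, no row padding
    for (int y = 0; y < height; ++y)
        memcpy(dst + y * stride, src + (height - 1 - y) * stride, stride);
}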
I noticed that when using the I420 color space, the flipping disappears.
In addition, most current codecs (e.g. VP8) use the I420 color space as their raw I/O format.
I wrote a simple function that mirrors a frame in the I420 color space.
void Camera::OutputCallback(unsigned char* data, int len, uint32_t timestamp, void *instance_)
{
    Camera *instance = reinterpret_cast<Camera*>(instance_);
    Transport::RTPPacket packet;
    packet.rtpHeader.ts = timestamp;
    packet.payload = data;
    packet.payloadSize = len;
    if (instance->mirror)
    {
        Video::ResolutionValues rv = Video::GetValues(instance->resolution);
        int k = 0;
        // Y (luma) values
        for (int i = 0; i != rv.height; ++i)
        {
            for (int j = rv.width; j != 0; --j)
            {
                int l = ((rv.width * i) + j);
                instance->buffer[k++] = data[l];
            }
        }
        // U values
        for (int i = 0; i != rv.height / 2; ++i)
        {
            for (int j = (rv.width / 2); j != 0; --j)
            {
                int l = (((rv.width / 2) * i) + j) + rv.height * rv.width;
                instance->buffer[k++] = data[l];
            }
        }
        // V values
        for (int i = 0; i != rv.height / 2; ++i)
        {
            for (int j = (rv.width / 2); j != 0; --j)
            {
                int l = (((rv.width / 2) * i) + j) + rv.height * rv.width + (rv.width / 2) * (rv.height / 2);
                if (l == len)
                {
                    instance->buffer[k++] = 0;
                }
                else
                {
                    instance->buffer[k++] = data[l];
                }
            }
        }
        packet.payload = instance->buffer;
    }
    instance->receiver->Send(packet);
}
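For reference, the plane offsets those three loops rely on (a sketch assuming no row padding): I420 stores a full-resolution Y plane followed by quarter-resolution U and V planes.

struct I420Offsets {
    int y, u, v;
};

I420Offsets i420Offsets(int width, int height)
{
    I420Offsets p;
    p.y = 0;                                           // width * height bytes of luma
    p.u = width * height;                              // (width/2) * (height/2) bytes
    p.v = width * height + (width / 2) * (height / 2); // (width/2) * (height/2) bytes
    return p;
}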