Copy YUV video frames from one buffer to another in GStreamer

I am extremely new to GStreamer. I am writing a plugin to reduce the height of a YUV video by 2. I get a segmentation fault when I try to copy data from buf (the argument to chain) to another buffer in the _chain() function, as follows:
GstBuffer *buffer;
glong size;
size = GST_BUFFER_SIZE(buf);
buffer = gst_buffer_new ();
GST_BUFFER_SIZE (buffer) = size;
GST_BUFFER_MALLOCDATA (buffer) = g_malloc (size);
GST_BUFFER_DATA (buffer) = GST_BUFFER_MALLOCDATA (buffer);
memcpy(buffer,buf,size);
Kindly help a newbie :)
Thank you

Your memcpy is copying over the GstBuffer object itself, not its data! Try using:
buffer = gst_buffer_new_and_alloc(size);
memcpy(GST_BUFFER_DATA(buffer), GST_BUFFER_DATA(buf), size);
You could also do:
buffer = gst_buffer_copy(buf);
GST_BUFFER_SIZE (buffer) = size;
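For the height-halving itself, a minimal sketch of a 0.10-style _chain() function that drops every other row might look like this (assuming a packed YUV format such as YUY2 and an even height; GstMyFilter, its rowbytes field, and srcpad are hypothetical names, and planar formats would need each plane handled separately):

static GstFlowReturn
gst_my_filter_chain (GstPad *pad, GstBuffer *buf)
{
  GstMyFilter *filter = GST_MY_FILTER (GST_OBJECT_PARENT (pad));
  guint rowbytes = filter->rowbytes;  /* bytes per row, known from caps */
  guint out_rows = (GST_BUFFER_SIZE (buf) / rowbytes) / 2;
  GstBuffer *outbuf = gst_buffer_new_and_alloc (out_rows * rowbytes);
  guint8 *src = GST_BUFFER_DATA (buf);
  guint8 *dst = GST_BUFFER_DATA (outbuf);
  guint row;

  /* keep every other row; the src pad caps must advertise the halved height */
  for (row = 0; row < out_rows; row++)
    memcpy (dst + row * rowbytes, src + (row * 2) * rowbytes, rowbytes);

  gst_buffer_copy_metadata (outbuf, buf, GST_BUFFER_COPY_TIMESTAMPS);
  gst_buffer_unref (buf);
  return gst_pad_push (filter->srcpad, outbuf);
}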

Related

OpenGL to FFmpeg encode

I have an OpenGL buffer that I need to forward directly to FFmpeg to do nvenc-based H264 encoding.
My current way of doing this is to call glReadPixels to get the pixels out of the framebuffer and then pass that pointer into FFmpeg so that it can encode the frame into H264 packets for RTSP. However, this is bad because I have to copy bytes out of GPU RAM into CPU RAM, only to copy them back onto the GPU for encoding.
If you look at the date of posting versus the date of this answer, you'll notice I spent a lot of time working on this. (It was my full-time job for the past 4 weeks.)
Since I had such a difficult time getting this to work, I will write up a short guide to hopefully help whoever finds this.
Outline
The basic flow I have is: OGL framebuffer object color attachment (texture) → nvenc (NVIDIA encoder).
Things to note
1) The nvidia encoder can accept YUV or RGB type images.
2) FFmpeg 4.0 and under cannot pass RGB images to nvenc.
3) FFmpeg has since been updated to accept RGB as input, following the issues I reported.
There are a couple different things to know about:
1) AVHWDeviceContext - Think of this as FFmpeg's device abstraction layer.
2) AVHWFramesContext - Think of this as FFmpeg's hardware frame abstraction layer.
3) cuMemcpy2D - The required method to copy a cuda-mapped OGL texture into a cuda buffer created by FFmpeg.
Comprehensiveness
This guide is in addition to standard software encoding guidelines. This is NOT complete code, and should only be used in addition to the standard flow.
Code details
Setup
You will first need to get your GPU name. To do this I found some code (I cannot remember where I got it from) that makes some cuda calls and gets the GPU name:
int getDeviceName(std::string& gpuName)
{
    //Setup the cuda context for hardware encoding with ffmpeg
    NV_ENC_BUFFER_FORMAT eFormat = NV_ENC_BUFFER_FORMAT_IYUV;
    int iGpu = 0;
    CUresult res;
    ck(cuInit(0));
    int nGpu = 0;
    ck(cuDeviceGetCount(&nGpu));
    if (iGpu < 0 || iGpu >= nGpu)
    {
        std::cout << "GPU ordinal out of range. Should be within [" << 0 << ", "
                  << nGpu - 1 << "]" << std::endl;
        return 1;
    }
    CUdevice cuDevice = 0;
    ck(cuDeviceGet(&cuDevice, iGpu));
    char szDeviceName[80];
    ck(cuDeviceGetName(szDeviceName, sizeof(szDeviceName), cuDevice));
    gpuName = szDeviceName;
    epLog::msg(epMSG_STATUS, "epVideoEncode:H264Encoder", "...using device \"%s\"", szDeviceName);
    return 0;
}
Next you will need to setup your hwdevice and hwframe contexts:
getDeviceName(gpuName);
ret = av_hwdevice_ctx_create(&m_avBufferRefDevice, AV_HWDEVICE_TYPE_CUDA, gpuName.c_str(), NULL, NULL);
if (ret < 0)
{
    return -1;
}

//Example of casts needed to get down to the cuda context
AVHWDeviceContext* hwDevContext = (AVHWDeviceContext*)(m_avBufferRefDevice->data);
AVCUDADeviceContext* cudaDevCtx = (AVCUDADeviceContext*)(hwDevContext->hwctx);
m_cuContext = &(cudaDevCtx->cuda_ctx);

//Create the hwframe_context
//  This is an abstraction of a cuda buffer for us. This enables us to, with one call,
//  set up the cuda buffer and ready it for input
m_avBufferRefFrame = av_hwframe_ctx_alloc(m_avBufferRefDevice);

//Set up some values before initialization
AVHWFramesContext* frameCtxPtr = (AVHWFramesContext*)(m_avBufferRefFrame->data);
frameCtxPtr->width = width;
frameCtxPtr->height = height;
frameCtxPtr->sw_format = AV_PIX_FMT_0BGR32; // Only certain types are supported here; we need to conform to them
frameCtxPtr->format = AV_PIX_FMT_CUDA;
frameCtxPtr->device_ref = m_avBufferRefDevice;
frameCtxPtr->device_ctx = (AVHWDeviceContext*)m_avBufferRefDevice->data;

//Initialization - This must be done to actually allocate the cuda buffer.
//  NOTE: This call will only work for our input format if the FFmpeg library version is >4.0
ret = av_hwframe_ctx_init(m_avBufferRefFrame);
if (ret < 0) {
    return -1;
}

//Cast the OGL texture/buffer to a cuda ptr
CUresult res;
CUcontext oldCtx;
m_inputTexture = texture;
res = cuCtxPopCurrent(&oldCtx); // THIS IS ALLOWED TO FAIL
res = cuCtxPushCurrent(*m_cuContext);
res = cuGraphicsGLRegisterImage(&cuInpTexRes, m_inputTexture, GL_TEXTURE_2D, CU_GRAPHICS_REGISTER_FLAGS_READ_ONLY);
res = cuCtxPopCurrent(&oldCtx); // THIS IS ALLOWED TO FAIL

//Assign some hardware-accel-specific data to the AVCodecContext
c->hw_device_ctx = m_avBufferRefDevice; // This must be done BEFORE avcodec_open2()
c->pix_fmt = AV_PIX_FMT_CUDA; // Since this is a cuda buffer, although it's really opengl with a cuda ptr
c->hw_frames_ctx = m_avBufferRefFrame;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->sw_pix_fmt = AV_PIX_FMT_0BGR32;

// Set up some cuda stuff for memcpy-ing later
m_memCpyStruct.srcXInBytes = 0;
m_memCpyStruct.srcY = 0;
m_memCpyStruct.srcMemoryType = CUmemorytype::CU_MEMORYTYPE_ARRAY;
m_memCpyStruct.dstXInBytes = 0;
m_memCpyStruct.dstY = 0;
m_memCpyStruct.dstMemoryType = CUmemorytype::CU_MEMORYTYPE_DEVICE;
Keep in mind that although a lot is done above, the code shown is IN ADDITION to the standard software encoding code. Make sure to include all of those calls/object initializations as well.
Unlike the software version, all that is needed for the input AVFrame object is to get the buffer AFTER your alloc call:
// allocate RGB video frame buffer
ret = av_hwframe_get_buffer(m_avBufferRefFrame, rgb_frame, 0); // 0 is for flags, not used at the moment
Notice that it takes the hwframe_context as an argument; this is how it knows what device, size, format, etc. to allocate for on the GPU.
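For reference, the "alloc call" mentioned above is just the standard AVFrame allocation; a hedged one-line sketch (av_hwframe_get_buffer() then attaches the cuda-backed buffers and fills in format and dimensions from the frames context):

AVFrame *rgb_frame = av_frame_alloc(); // empty shell, no buffers attached yet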
Call to encode each frame
Now we are set up and ready to encode. Before each encode we need to copy the frame from the texture to a cuda buffer. We do this by mapping a cuda array to the texture, then copying that array to a CUdeviceptr (which was allocated by the av_hwframe_get_buffer call above):
//Perform cuda mem copy for input buffer
CUresult cuRes;
CUarray mappedArray;
CUcontext oldCtx;

//Get context
cuRes = cuCtxPopCurrent(&oldCtx); // THIS IS ALLOWED TO FAIL
cuRes = cuCtxPushCurrent(*m_cuContext);

//Get texture
cuRes = cuGraphicsResourceSetMapFlags(cuInpTexRes, CU_GRAPHICS_MAP_RESOURCE_FLAGS_READ_ONLY);
cuRes = cuGraphicsMapResources(1, &cuInpTexRes, 0);

//Map texture to cuda array
cuRes = cuGraphicsSubResourceGetMappedArray(&mappedArray, cuInpTexRes, 0, 0); // Nvidia says it's good practice to remap each iteration, as OGL can move things around

//Release texture
cuRes = cuGraphicsUnmapResources(1, &cuInpTexRes, 0);

//Set up for memcopy
m_memCpyStruct.srcArray = mappedArray;
m_memCpyStruct.dstDevice = (CUdeviceptr)rgb_frame->data[0]; // Make sure to copy the devptr, as it could change upon resize
m_memCpyStruct.dstPitch = rgb_frame->linesize[0]; // Linesize is generated by hwframe_context
m_memCpyStruct.WidthInBytes = rgb_frame->width * 4; // *4 needed: 4 bytes per pixel
m_memCpyStruct.Height = rgb_frame->height; // Vanilla height for frame

//Do memcpy
cuRes = cuMemcpy2D(&m_memCpyStruct);

//Release context
cuRes = cuCtxPopCurrent(&oldCtx); // THIS IS ALLOWED TO FAIL
Now we can simply call send_frame and it all works!
ret = avcodec_send_frame(c, rgb_frame);
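For completeness, the packet drain loop that follows is unchanged from software encoding; a minimal sketch, assuming pkt is an av_packet_alloc()ed AVPacket:

while (ret >= 0) {
    ret = avcodec_receive_packet(c, pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;     // encoder needs more input, or is fully drained
    else if (ret < 0)
        return -1; // real encoding error
    // pkt->data now holds an encoded H264 packet, ready for the RTSP muxer
    av_packet_unref(pkt);
}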
Note: I left most of my code out, since it is not for the public. I may have some details incorrect; this is how I was able to make sense of all the data I gathered over the past month... feel free to correct anything that is wrong. Also, fun fact: during all this my computer crashed and I lost all my initial investigation (everything I didn't check into source control), which includes all the various example code I found around the internet. So if you see something and it's yours, please call it out. This can help others come to the conclusion that I came to.
Shoutout
Big shout out to BtbN at https://webchat.freenode.net/ #ffmpeg; I wouldn't have gotten any of this without their help.
First thing to check: it may be "bad", but is it running fast enough anyway? It's always nice to be more efficient, but if it works, don't break it.
If there really is a performance problem...
1) Use FFmpeg software encoding only, without hardware assistance. Then you'll only be copying from GPU to CPU once. (If the video encoder is on the GPU and you're sending packets out via RTSP, there's a second GPU-to-CPU copy after encoding.) See the sketch after this list.
2) Look for an NVIDIA (I assume that's the GPU, given you mention nvenc) GL extension to texture formats and/or commands that will perform on-GPU H264 encoding directly from OpenGL buffers.
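A hedged sketch of option 1 (the function names are from the public FFmpeg API; converting the glReadPixels RGBA buffer with sws_scale is an assumption about the rest of the pipeline):

// Pick the software encoder explicitly, so pixels cross the
// GPU->CPU boundary only once, at glReadPixels time.
const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
AVCodecContext *ctx = avcodec_alloc_context3(codec);
ctx->pix_fmt = AV_PIX_FMT_YUV420P; // typical libx264 input
// ...set width/height/time_base and call avcodec_open2(), then use
// sws_getContext()/sws_scale() to convert each RGBA frame to YUV420P
// before avcodec_send_frame().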

NVenc's output bitstream is not readable

I have a question related to Nvidia's NVenc API. I want to use the API to encode some OpenGL graphics. My problem is that the API reports no error throughout the whole program; everything seems to be fine. But the generated output is not readable by, e.g., VLC. If I try to play the generated file, VLC flashes a black screen for about 0.5 s, then ends playback.
The video has a length of 0, and the file size seems rather small, too.
The resolution is 1280x720 and the size of a 5-second recording is only 700 kB. Is this realistic?
The flow of the application is as follows:
1) Render to a secondary framebuffer.
2) Download the framebuffer to one of two PBOs (glReadPixels()).
3) Map the PBO of the previous frame to get a pointer understandable by Cuda.
4) Call a simple CudaKernel converting OpenGL's RGBA to ARGB, which should be understandable by NVenc according to this (p. 18). The kernel reads the content of the PBO and writes the converted content into a CudaArray (created with cudaMalloc) which is registered as an InputResource with NVenc.
5) The content of the converted array gets encoded. A completion event plus the corresponding output bitstream buffer get queued.
6) A secondary thread listens on the queued output events; if an event is signaled, the output bitstream gets mapped and written to HDD.
The initialization of the NVenc encoder:
InitParams* ip = new InitParams();
m_initParams = ip;
memset(ip, 0, sizeof(InitParams));
ip->version = NV_ENC_INITIALIZE_PARAMS_VER;
ip->encodeGUID = m_encoderGuid; //Used Codec
ip->encodeWidth = width; // Frame Width
ip->encodeHeight = height; // Frame Height
ip->maxEncodeWidth = 0; // Zero means no dynamic res changes
ip->maxEncodeHeight = 0;
ip->darWidth = width; // Aspect Ratio
ip->darHeight = height;
ip->frameRateNum = 60; // 60 fps
ip->frameRateDen = 1;
ip->reportSliceOffsets = 0; // According to programming guide
ip->enableSubFrameWrite = 0;
ip->presetGUID = m_presetGuid; // Used Preset for Encoder Config
NV_ENC_PRESET_CONFIG presetCfg; // Load the Preset Config
memset(&presetCfg, 0, sizeof(NV_ENC_PRESET_CONFIG));
presetCfg.version = NV_ENC_PRESET_CONFIG_VER;
presetCfg.presetCfg.version = NV_ENC_CONFIG_VER;
CheckApiError(m_apiFunctions.nvEncGetEncodePresetConfig(m_Encoder,
m_encoderGuid, m_presetGuid, &presetCfg));
memcpy(&m_encodingConfig, &presetCfg.presetCfg, sizeof(NV_ENC_CONFIG));
// And add information about Bitrate etc
m_encodingConfig.rcParams.averageBitRate = 500000;
m_encodingConfig.rcParams.maxBitRate = 600000;
m_encodingConfig.rcParams.rateControlMode = NV_ENC_PARAMS_RC_MODE::NV_ENC_PARAMS_RC_CBR;
ip->encodeConfig = &m_encodingConfig;
ip->enableEncodeAsync = 1; // Async Encoding
ip->enablePTD = 1; // Encoder handles picture ordering
Registration of CudaResource
m_cuContext->SetCurrent(); // Make the clients cuCtx current
NV_ENC_REGISTER_RESOURCE res;
memset(&res, 0, sizeof(NV_ENC_REGISTER_RESOURCE));
NV_ENC_REGISTERED_PTR resPtr; // handle to the cuda resource for future use
res.bufferFormat = m_inputFormat; // Format is ARGB
res.height = m_height;
res.width = m_width;
// NOTE: I've set the pitch to the width of the frame, because the resource is a non-pitched
//cudaArray. Is this correct? Pitch = 0 would produce no output.
res.pitch = pitch;
res.resourceToRegister = (void*) (uintptr_t) resourceToRegister; //CUdevptr to resource
res.resourceType =
NV_ENC_INPUT_RESOURCE_TYPE::NV_ENC_INPUT_RESOURCE_TYPE_CUDADEVICEPTR;
res.version = NV_ENC_REGISTER_RESOURCE_VER;
CheckApiError(m_apiFunctions.nvEncRegisterResource(m_Encoder, &res));
m_registeredInputResources.push_back(res.registeredResource);
Encoding
m_cuContext->SetCurrent(); // Make Clients context current
MapInputResource(id); //Map the CudaInputResource
NV_ENC_PIC_PARAMS temp;
memset(&temp, 0, sizeof(NV_ENC_PIC_PARAMS));
temp.version = NV_ENC_PIC_PARAMS_VER;
unsigned int currentBufferAndEvent = m_counter % m_registeredEvents.size(); //Counter is inc'ed in every Frame
temp.bufferFmt = m_currentlyMappedInputBuffer.mappedBufferFmt;
temp.inputBuffer = m_currentlyMappedInputBuffer.mappedResource; //got set by MapInputResource
temp.completionEvent = m_registeredEvents[currentBufferAndEvent];
temp.outputBitstream = m_registeredOutputBuffers[currentBufferAndEvent];
temp.inputWidth = m_width;
temp.inputHeight = m_height;
temp.inputPitch = m_width;
temp.inputTimeStamp = m_counter;
temp.pictureStruct = NV_ENC_PIC_STRUCT_FRAME; // According to samples
temp.qpDeltaMap = NULL;
temp.qpDeltaMapSize = 0;
EventWithId latestEvent(currentBufferAndEvent,
m_registeredEvents[currentBufferAndEvent]);
PushBackEncodeEvent(latestEvent); // Store the Event with its ID in a Queue
CheckApiError(m_apiFunctions.nvEncEncodePicture(m_Encoder, &temp));
m_counter++;
UnmapInputResource(id); // Unmap
Every little hint about where to look is very much appreciated. I'm running out of ideas about what might be wrong.
Thanks a lot!
With the help of hall822 from the nvidia forums, I managed to solve the issue.
The primary error was that I registered my cuda resource with a pitch equal to the size of the frame. I'm using a Framebuffer-Renderbuffer to draw my content into. Its data is a plain, unpitched array. My first thought, giving a pitch equal to zero, failed: the encoder did nothing. The next idea was to set it to the width of the frame; then a quarter of the image was encoded.
// NOTE: I've set the pitch to the width of the frame, because the resource is a non-pitched
//cudaArray. Is this correct? Pitch = 0 would produce no output.
res.pitch = pitch;
To answer this question: yes, it is correct. But the pitch is measured in bytes. So because I'm encoding RGBA frames, the correct pitch has to be FRAME_WIDTH * 4.
The second error was that my color channels were not right (see point 4 in my opening post). The NVidia enum says that the encoder expects the channels in ARGB format, but what is actually meant is BGRA; so the alpha channel, which is always 255, polluted the blue channel.
Edit: This may be due to the fact that NVidia uses little endian internally. I was writing my pixel data to a byte array; choosing another type like int32 may allow one to pass actual ARGB data.
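Put together, a hedged sketch of the two fixes (NV_ENC_BUFFER_FORMAT_ARGB is the NVENC enum in question; the values follow from the explanation above):

res.pitch = m_width * 4;                      // pitch is in BYTES: 4 bytes per RGBA pixel
res.bufferFormat = NV_ENC_BUFFER_FORMAT_ARGB; // byte order in memory is B,G,R,A (little endian)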

Caching AVFrames using av_read_frame in a loop only gets the last couple of frames

I'm trying to process a video frame by frame using OpenGL.
I use ffmpeg to read frames from the video file.
Before I start processing the frames, I want to read some frames and store them in memory first.
So I tried using av_read_frame in a while loop and copying the frame data into an array.
But when I try to display those frames, I find that I only get the last couple of frames.
For example, if I want to cache 50 frames, I only get the last few (frame 45 to frame 50).
Here's the code I'm using to cache the frames:
void cacheFrames()
{
    AVPacket tempPacket;
    av_init_packet(&tempPacket);
    int i = 0;
    avcodec_flush_buffers(formatContext->streams[streamIndex]->codec);
    codecContext = formatContext->streams[streamIndex]->codec;
    while (av_read_frame(formatContext, &tempPacket) >= 0 && i < NUM_FRAMES)
    {
        int finished = 0;
        if (tempPacket.stream_index == streamIndex)
        {
            avcodec_decode_video2(
                codecContext,
                frame,
                &finished,
                &tempPacket);
            if (finished)
            {
                memcpy(datas[i].datas, frame->data, sizeof(frame->data)); // copy the frame data into an array
                i++;
            }
        }
    }
    av_free_packet(&tempPacket);
}
So, what am I doing wrong?
data is defined as
uint8_t* AVFrame::data[AV_NUM_DATA_POINTERS]
The operation
memcpy(datas[i].datas, frame->data, sizeof(frame->data)); // copy the frame data into an array
will copy AV_NUM_DATA_POINTERS pointers into datas[i].datas. This is incorrect because you are only copying references to frame buffers that you haven't allocated yourself. Moreover, only the last frame's buffers are guaranteed to still be available after avcodec_decode_video2.
To keep the data as long as you want, you need to clone the frame:
AVFrame* framearray[NUM_FRAMES];
...
if (finished)
{
    framearray[i] = av_frame_clone(frame);
    i++;
}
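One caveat worth adding (an assumption about intended usage, not part of the original answer): av_frame_clone() takes a new reference to the frame's buffers, so every cached frame must eventually be released, e.g.:

for (int j = 0; j < NUM_FRAMES; j++)
    av_frame_free(&framearray[j]); // drops the reference taken by av_frame_clone()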

Load pixmap from buffer in Qt

I currently have the following details about an image:
int nBufSize; // contains the buffer size
void* lpBmpBuf; // pointer to the first byte of the bitmap buffer
How can I obtain a QPixmap from this?
Here is what I am doing so far:
QByteArray b((char*)lpBmpBuf, nBufSize);
bool t = pix.loadFromData(b, 0, Qt::AutoColor);
However, t is false in this case. Any suggestions?
Copy the bitmap buffer into a QByteArray, as you also have the length, then:
pix.loadFromData(data, 0, Qt::AutoColor);
data is the QByteArray in my example (loadFromData takes the array by const reference, not a pointer).
Also, if you know the extension/type of the file, you can specify it in the 2nd argument:
pix.loadFromData(data, "BMP");
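For completeness, a self-contained hedged sketch, assuming lpBmpBuf holds a complete BMP file including its header (raw pixel data without a header would instead need a QImage constructed from the known pixel layout; pixmapFromBuffer is a hypothetical helper name):

#include <QPixmap>
#include <QByteArray>

QPixmap pixmapFromBuffer(const void* lpBmpBuf, int nBufSize)
{
    // Wrap the raw buffer without copying, then let Qt parse the BMP header.
    QByteArray data = QByteArray::fromRawData(static_cast<const char*>(lpBmpBuf), nBufSize);
    QPixmap pix;
    if (!pix.loadFromData(data, "BMP"))
        qWarning("loadFromData failed: the buffer may lack a BMP file header");
    return pix;
}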

NVIDIA CUDA Video Encoder (NVCUVENC) input from device texture array

I am modifying the CUDA Video Encoder (NVCUVENC) encoding sample found in the SDK samples pack so that the data comes not from external yuv files (as is done in the sample) but from a cudaArray which is filled from a texture.
So the key API method that encodes the frame is:
int NVENCAPI NVEncodeFrame(NVEncoder hNVEncoder, NVVE_EncodeFrameParams *pFrmIn, unsigned long flag, void *pData);
If I get it right, the param:
CUdeviceptr dptr_VideoFrame
is supposed to pass the data to encode. But I really haven't understood how to connect it with some texture data on the GPU. The sample source code is very vague about it, as it works with CPU yuv file input.
For example in main.cpp, lines 555-560, there is the following block:
// If dptrVideoFrame is NULL, then we assume that frames come from system memory, otherwise it comes from GPU memory
// VideoEncoder.cpp, EncodeFrame() will automatically copy it to GPU Device memory, if GPU device input is specified
if (pCudaEncoder->EncodeFrame(efparams, dptrVideoFrame, cuCtxLock) == false)
{
printf("\nEncodeFrame() failed to encode frame\n");
}
So, from the comment, it seems like dptrVideoFrame should be filled with yuv data coming from the device to encode the frame. But there is no place where it is explained how to do so.
UPDATE:
I would like to share some findings. First, I managed to encode data from the framebuffer texture. The problem now is that the output video is a mess.
(An image of the desired result was attached here; omitted.)
Here is what I do:
On the OpenGL side I have 2 custom FBOs: the first gets the scene rendered normally into it. Then the texture from the first FBO is used to render a screen quad into the second FBO, doing RGB -> YUV conversion in the fragment shader.
The texture attached to the second FBO is then mapped to a CUDA resource.
Then I encode the current texture like this:
void CUDAEncoder::Encode()
{
    NVVE_EncodeFrameParams efparams;
    efparams.Height = sEncoderParams.iOutputSize[1];
    efparams.Width = sEncoderParams.iOutputSize[0];
    efparams.Pitch = (sEncoderParams.nDeviceMemPitch ? sEncoderParams.nDeviceMemPitch : sEncoderParams.iOutputSize[0]);
    efparams.PictureStruc = (NVVE_PicStruct)sEncoderParams.iPictureType;
    efparams.SurfFmt = (NVVE_SurfaceFormat)sEncoderParams.iSurfaceFormat;
    efparams.progressiveFrame = (sEncoderParams.iSurfaceFormat == 3) ? 1 : 0;
    efparams.repeatFirstField = 0;
    efparams.topfieldfirst = (sEncoderParams.iSurfaceFormat == 1) ? 1 : 0;
    if (_curFrame > _framesTotal) {
        efparams.bLast = 1;
    } else {
        efparams.bLast = 0;
    }

    //----------- get cuda array from the texture resource -------------//
    checkCudaErrorsDrv(cuGraphicsMapResources(1, &_cutexResource, NULL));
    checkCudaErrorsDrv(cuGraphicsSubResourceGetMappedArray(&_cutexArray, _cutexResource, 0, 0));

    /////////// copy data into dptrVideoFrame //////////
    // LUMA, based on the CUDA SDK sample //////////////
    CUDA_MEMCPY2D pcopy;
    memset((void*)&pcopy, 0, sizeof(pcopy));
    pcopy.srcXInBytes = 0;
    pcopy.srcY = 0;
    pcopy.srcHost = NULL;
    pcopy.srcDevice = 0;
    pcopy.srcPitch = efparams.Width;
    pcopy.srcArray = _cutexArray; // SOME DEVICE ARRAY!!! <--------- to figure out how to fill this.
    /// destination //////
    pcopy.dstXInBytes = 0;
    pcopy.dstY = 0;
    pcopy.dstHost = 0;
    pcopy.dstArray = 0;
    pcopy.dstDevice = dptrVideoFrame;
    pcopy.dstPitch = sEncoderParams.nDeviceMemPitch;
    pcopy.WidthInBytes = sEncoderParams.iInputSize[0];
    pcopy.Height = sEncoderParams.iInputSize[1];
    pcopy.srcMemoryType = CU_MEMORYTYPE_ARRAY;
    pcopy.dstMemoryType = CU_MEMORYTYPE_DEVICE;

    // CHROMA, based on the CUDA SDK sample /////
    CUDA_MEMCPY2D pcChroma;
    memset((void*)&pcChroma, 0, sizeof(pcChroma));
    pcChroma.srcXInBytes = 0;
    pcChroma.srcY = 0; // if I instead use sEncoderParams.iInputSize[1] << 1 (the U/V chroma offset) I get an error from cuda for an incorrect value, though it works in the original CUDA SDK sample
    pcChroma.srcHost = NULL;
    pcChroma.srcDevice = 0;
    pcChroma.srcArray = _cutexArray;
    pcChroma.srcPitch = efparams.Width >> 1; // chroma is subsampled by 2 (but U/V are next to each other)
    pcChroma.dstXInBytes = 0;
    pcChroma.dstY = sEncoderParams.iInputSize[1] << 1; // chroma offset (srcY*srcPitch now points to the chroma planes)
    pcChroma.dstHost = 0;
    pcChroma.dstDevice = dptrVideoFrame;
    pcChroma.dstArray = 0;
    pcChroma.dstPitch = sEncoderParams.nDeviceMemPitch >> 1;
    pcChroma.WidthInBytes = sEncoderParams.iInputSize[0] >> 1;
    pcChroma.Height = sEncoderParams.iInputSize[1]; // U/V are sent together
    pcChroma.srcMemoryType = CU_MEMORYTYPE_ARRAY;
    pcChroma.dstMemoryType = CU_MEMORYTYPE_DEVICE;

    checkCudaErrorsDrv(cuvidCtxLock(cuCtxLock, 0));
    checkCudaErrorsDrv(cuMemcpy2D(&pcopy));
    checkCudaErrorsDrv(cuMemcpy2D(&pcChroma));
    checkCudaErrorsDrv(cuvidCtxUnlock(cuCtxLock, 0));

    //=============================================
    // If dptrVideoFrame is NULL, then we assume that frames come from system memory, otherwise they come from GPU memory
    // VideoEncoder.cpp, EncodeFrame() will automatically copy it to GPU device memory, if GPU device input is specified
    if (_encoder->EncodeFrame(efparams, dptrVideoFrame, cuCtxLock) == false)
    {
        printf("\nEncodeFrame() failed to encode frame\n");
    }
    checkCudaErrorsDrv(cuGraphicsUnmapResources(1, &_cutexResource, NULL));
    // computeFPS();
    if (_curFrame > _framesTotal) {
        _encoder->Stop();
        exit(0);
    }
    _curFrame++;
}
I set the encoder params from the .cfg files included with the CUDA SDK Encoder sample, using the 704x480-h264.cfg setup. I tried all of them and always get a similarly ugly result.
I suspect the problem is somewhere in the CUDA_MEMCPY2D params setup for the luma and chroma objects: maybe wrong pitch, width, or height dimensions. I set the viewport to the same size as the video (704x480) and compared the params to those used in the CUDA SDK sample, but got no clue where the problem is.
Anyone ?
First: I messed around with the Cuda Video Encoder too, and had lots of trouble. But it looks to me as if you convert to YUV values one pixel at a time (like AYUV 4:4:4). Afaik you need the correct kind of YUV, with padding and chroma subsampling (color values shared by more than one pixel, like 4:2:0). A good overview of YUV alignments can be seen here:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
As far as I remember, you have to use NV12 alignment for the Cuda Encoder.
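To make the layout difference concrete, a hedged sketch of the NV12 size arithmetic (assuming even width and height): a full-resolution Y plane followed by a half-height plane of interleaved U/V, i.e. 1.5 bytes per pixel instead of AYUV's 4:

size_t nv12_frame_size(size_t width, size_t height)
{
    size_t luma   = width * height;       // Y plane, full resolution
    size_t chroma = width * (height / 2); // interleaved U/V, 2x2 subsampled
    return luma + chroma;                 // = width * height * 3 / 2
}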
The nvEncoder application is used for codec conversion; for processing on the GPU it uses cuda, and for communicating with the hardware it uses the nvEncoder API.
The logic of this application is: read yuv data into an input buffer, store that content in memory, and then start encoding the frames, writing each encoded frame to the output file in parallel.
Handling of the input buffer is available in the nvRead function, in nvFileIO.h.
If any other help is required, leave a message here...