So I have a problem converting a BYTE buffer to an image (cv::Mat).
I am trying to read a real-time video feed from a remote camera. I have two elements, a pointer to the buffer and the buffer size, and I need to convert them to a cv::Mat image so that I can show it with cv::imshow. I tried to use:
cv::imdecode(cv::Mat(bufferSize,CV_8UC3,*buffer),cv::imread_color);
but it isn't working and I get this error:
error: (-215:Assertion failed) buf.checkVector(1, CV_8U) > 0 in function 'imdecode_'
When I try to convert directly, without the imdecode function, like this:
cv::Mat(bufferSize,CV_8UC3,*buffer)
I get an image but I can't show it, so the program just continues running without doing anything.
Can anyone please help me with how to convert a BYTE buffer pointer to a cv::Mat image?
EDIT:
The buffer is declared like this: BYTE *Buffer
The function where I get the buffer from is declared like this:
void CALLBACK RealDataCallBackEx(LLONG lRealHandle, DWORD dwDataType, BYTE *pBuffer, DWORD dwBufSize, LONG param, LDWORD dwUser)
where:
lRealHandle : Real-time monitoring handle
dwDataType :
0 : Original data which is consistent with data saved by savedRealData
1 : Frame data.
2 : Yuv data.
3 : Pcm audio data.
pBuffer : Buffer for callback data. Data of different length will be called back according to different data type. The data are called back by frame for every type but type 0, and each time one frame is called back.
dwBufSize : Callback data length. The data buffers are different for different types. The unit is BYTE.
In my case I always get data type 0.
This is how I then try to decode:
cv::Mat img = cv::imdecode(cv::Mat(dwBufSize,CV_8UC3,*pBuffer),cv::IMREAD_COLOR);
cv::imshow("img",img);
My program stops here; it continues running but doesn't do anything afterwards. I have put a std::cout after this imshow line to check whether it gets past it, but nothing happens.
Thank you.
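For reference, a minimal sketch of what imdecode expects: a flat CV_8U buffer wrapped in a cv::Mat, the pointer itself rather than *pBuffer, and a cv::waitKey call so imshow actually renders. This assumes the buffer holds one complete encoded image (e.g. JPEG); if data type 0 is a raw video stream, imdecode cannot decode it and the data would have to go through a video decoder first.

// Sketch, assuming pBuffer points to one complete encoded image (e.g. JPEG/PNG).
cv::Mat raw(1, static_cast<int>(dwBufSize), CV_8UC1, pBuffer);  // flat byte buffer, no copy
cv::Mat img = cv::imdecode(raw, cv::IMREAD_COLOR);
if (!img.empty()) {
    cv::imshow("img", img);
    cv::waitKey(1);  // imshow only draws once the event loop runs
}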
I'm using FFmpeg's swr_convert to convert AV_SAMPLE_FMT_FLTP audio. I've been successful converting to a different sample format (e.g. AV_SAMPLE_FMT_FLT and AV_SAMPLE_FMT_S16), but I'm running into trouble when I try to keep the AV_SAMPLE_FMT_FLTP sample format but change the sample rate.
When converting AV_SAMPLE_FMT_FLTP to AV_SAMPLE_FMT_FLTP, swr_convert attempts to write to an empty buffer.
I'm using swr_convert to convert from 22050 Hz AV_SAMPLE_FMT_FLTP to 16000 Hz AV_SAMPLE_FMT_FLTP.
I initialized SwrContext like so:
if (swr_alloc_set_opts2(
        &resample_context,
        &pAVContext->ch_layout, AV_SAMPLE_FMT_FLTP, 16000,
        &pAVContext->ch_layout, AV_SAMPLE_FMT_FLTP, 22050, 0, NULL) < 0)
    return ERR_SWR_INIT_FAIL;

if (swr_init(resample_context) < 0)
    return ERR_SWR_INIT_FAIL;
and when I call it like this, the program tries to write to a null buffer and crashes.
samples_decoded = swr_convert(ctx->pSwrContext,
                              &pDecodedAudio, numOutSamples,
                              (const uint8_t**)&pDecodedFrame->data, pDecodedFrame->nb_samples);
So far I've traced the problem to swr_convert_internal:
if(s->int_sample_fmt == s->out_sample_fmt && s->out.planar
&& !(s->out_sample_fmt==AV_SAMPLE_FMT_S32P && (s->dither.output_sample_bits&31))){
//Sample format is planar and input format is same as output format
if(preout==in){
out_count= FFMIN(out_count, in_count);
av_assert0(s->in.planar);
copy(out, in, out_count);
return out_count;
}
else if(preout==postin) preout= midbuf= postin= out;
else if(preout==midbuf) preout= midbuf= out;
else preout= out;
}
That bit of code assigns out to preout, but out's data is uninitialized. Later on, FFmpeg tries to write to the uninitialized block.
I've tested this in 5.1 and in the snapshot build, and it crashes both of them.
So, am I doing something wrong, or is this a bug?
I was doing something wrong. Packed audio is one contiguous block of memory and can be referenced by a single pointer, but planar audio has a separate pointer for each channel. To fix this, I set up two pointers into my pDecodedAudio block.
uint8_t* convertedData[2] = {
    pDecodedAudio,
    pDecodedAudio + (numOutSamples * ctx->output_sample_size)
};

samples_decoded = swr_convert(ctx->pSwrContext,
                              convertedData, numOutSamples,
                              pDecodedFrame->data, pDecodedFrame->nb_samples);
See the comments in AVFrame.
/*
* For planar audio, each channel has a separate data pointer, and
* linesize[0] contains the size of each channel buffer.
* For packed audio, there is just one data pointer, and linesize[0]
* contains the total size of the buffer for all channels.
*
* Note: Both data and extended_data should always be set in a valid frame,
* but for planar audio with more channels than can fit in data,
* extended_data must be used in order to access all channels.
*/
uint8_t **extended_data;
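For more than two channels the same idea generalizes: one output pointer per channel, each pointing into the contiguous block, and the frame's extended_data (or data) as the input plane array. A rough sketch, where numChannels is assumed to come from the surrounding context along with numOutSamples and ctx->output_sample_size:

// Sketch: one output pointer per channel, all into one contiguous block.
std::vector<uint8_t*> outPlanes(numChannels);
for (int ch = 0; ch < numChannels; ++ch)
    outPlanes[ch] = pDecodedAudio + (size_t)ch * numOutSamples * ctx->output_sample_size;

samples_decoded = swr_convert(ctx->pSwrContext,
                              outPlanes.data(), numOutSamples,
                              (const uint8_t**)pDecodedFrame->extended_data,
                              pDecodedFrame->nb_samples);

av_samples_alloc() can also allocate the output block and fill the per-channel pointers in one call.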
I've used the OpenH264 tutorial (https://github.com/cisco/openh264/wiki/UsageExampleForDecoder) to successfully decode an H264 frame, but I can't figure out from the tutorial what the output format is.
I'm using the "unsigned char *pDataResult[3];" (pData in the tutorial), and this gets populated, but I need to know the length in order to convert it to byte buffers to return to Java. I also need to know who owns this data (it seems to be owned by the decoder). This info isn't mentioned in the tutorial or docs as far as I can find.
unsigned char *pDataResult[3];
int iRet = pSvcDecoder->DecodeFrameNoDelay(pBuf, iSize, pDataResult, &sDstBufInfo);
The tutorial also lists an initializer, but gives "..." as the assignment.
//output: [0~2] for Y,U,V buffer for Decoding only
unsigned char *pData[3] =...;
Is the YUV data null terminated?
There is the SBufferInfo last parameter, which contains a TagSysMemBuffer:
typedef struct TagSysMemBuffer {
    int iWidth;      ///< width of decoded pic for display
    int iHeight;     ///< height of decoded pic for display
    int iFormat;     ///< type is "EVideoFormatType"
    int iStride[2];  ///< stride of 2 component
} SSysMEMBuffer;
And the length is probably in there, but it's not exactly clear where. Maybe it is "iWidth*iHeight" for each buffer?
pData is freed in the decoder destructor with WelsFreeDynamicMemory in decoder.cpp, just as you supposed.
The decoder itself assigns nullptrs to the channel pointers, but it's fine to initialize pData with them yourself as a good habit.
You have the iSize parameter as input; that is the byte buffer length you want.
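If what's actually needed is the length of each decoded YUV plane (rather than the input size), it is usually derived from the stride and height, not from iWidth alone, and the planes are raw binary pixel data, so they are not null terminated. A sketch, assuming I420 (YUV 4:2:0) output and that the buffer info is reached through sDstBufInfo.UsrData.sSystemBuffer:

// Sketch, assuming I420 (YUV 4:2:0 planar) output.
int h       = sDstBufInfo.UsrData.sSystemBuffer.iHeight;
int yStride = sDstBufInfo.UsrData.sSystemBuffer.iStride[0];
int cStride = sDstBufInfo.UsrData.sSystemBuffer.iStride[1];

size_t ySize = (size_t)yStride * h;        // length of pDataResult[0]
size_t cSize = (size_t)cStride * (h / 2);  // length of pDataResult[1] and pDataResult[2]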
Is it possible that the PTS of a particular frame in a file is different from the PTS of the same frame in the same file while it is being streamed?
When I read a frame using av_read_frame I store the video stream in an AVStream. After I decode the frame with avcodec_decode_video2, I store the timestamp of that frame in an int64_t using av_frame_get_best_effort_timestamp. Now, if the program gets its input from a file, I get a different timestamp than when I stream the input (from the same file) to the program.
To change the input type I simply change the argv argument from "/path/to/file.mp4" to something like "udp://localhost:1234", then I stream the file with ffmpeg on the command line: "ffmpeg -re -i /path/to/file.mp4 -f mpegts udp://localhost:1234". Can it be that the "-f mpegts" argument changes some characteristics of the media?
Below is my code (simplified). By reading the FFmpeg mailing list archives I realized that the time_base I'm looking for is in the AVStream and not the AVCodecContext. Instead of using av_frame_get_best_effort_timestamp I have also tried using packet.pts, but the results don't change.
I need the time stamps to have a notion of frame number in a streaming video that is being received.
I would really appreciate any sort of help.
//..
// argv[1] = "/file.mp4";
argv[1] = "udp://localhost:7777";
// define AVFormatContext, AVFrame, etc.
// register av, avcodec, avformat_network_init(), etc.
avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
avformat_find_stream_info(pFormatCtx, NULL);
// find the video stream...
// pointer to the codec context...
// open codec...
pFrame = av_frame_alloc();
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    AVStream *stream = pFormatCtx->streams[videoStream];
    if (packet.stream_index == videoStream) {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished) {
            int64_t perts = av_frame_get_best_effort_timestamp(pFrame);
            if (isMyFrame(pFrame)) {
                cout << perts * av_q2d(stream->time_base) << "\n";
            }
        }
    }
    // free allocated space
}
//..
Timestamps are stored at the container level, so changing the container can change the timestamps. In addition, TS stores a timestamp for every frame (based on a 90 kHz clock), while MP4 only stores frame durations with an assumed start time of 0 (this gets more complicated with B-frames, since the first PTS is zero and the first DTS is less than 0); to get a frame's timestamp, all the preceding frame durations are added up. MP4 also allows the clock rate to be set; for 29.97 FPS the frame duration is often 1001/30000 of a second, but it can be set to anything. So av_frame_get_best_effort_timestamp returns ticks in the stream's time_base units. For TS, that time_base is always 1/90000.
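Since the goal is a notion of frame number, one way to derive it from the best-effort timestamp regardless of container is to convert ticks to seconds and multiply by the frame rate. A rough sketch, assuming a constant frame rate and that stream->avg_frame_rate is populated:

// Sketch: frame index from the best-effort PTS (assumes constant frame rate).
double seconds = perts * av_q2d(stream->time_base);   // PTS converted to seconds
double fps     = av_q2d(stream->avg_frame_rate);      // 0 if the frame rate is unknown
int64_t frameIndex = (fps > 0) ? (int64_t)(seconds * fps + 0.5) : -1;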
MSVS 2010, Windows 7
I am using an API to access camera features.
The following function displays a frame and saves it.
void DisplayThread::OnBufferDisplay( PvBuffer *aBuffer )
{
    mDisplayWnd->Display( *aBuffer ); // displaying the frame

    // Now let us try to save the frame with a name of the form %Y%m%d%H%M%S.bmp
    system( "mkdir D:\\ABCD" );

    struct tm *tm;
    int count;
    time_t t;
    char str_time[20];

    t = time(NULL);
    tm = localtime(&t);
    strftime(str_time, sizeof(str_time), "%Y%m%d%H%M%S.bmp", tm); // name of the frame

    char name[1000]; // sufficient space
    sprintf(name, "%s", str_time);

    char path[] = "D:\\ABCD";
    strcat(path, name); // path = path + "\\" + name;

    // char* str = (char*)(void*)Marshal::StringToHGlobalAnsi(path);
    PvString lFilename( path );
    PvString lCompleteFileName( lFilename );

    PvBufferWriter lBufferWriter; // the following call saves the image
    PvResult lResult = lBufferWriter.Store( aBuffer, lCompleteFileName, PvBufferFormatBMP );
}
The name of the bmp file that is saved is of the form %Y%m%d%H%M%S.bmp
The program builds perfectly fine, and even the display comes up correctly, but the following error message pops up:
It looks like something is wrong with the memory allocation for the variable 'name'.
But I have allocated sufficient space, and even then I am getting this error.
Why is this happening?
Kindly let me know if more info is required to debug this.
Note: The value returned by lBufferWriter.Store() is 'OK' (indicating that buffer/frame writing was successful), but no file is getting saved. I guess this is because of the run-time check failure I am getting.
Please help.
Your path[] array size is 8, which is too small to hold the string after concatenation.
As this path variable is on the stack, the overflow is corrupting your stack.
So your buffer should be large enough to hold the data that you want to put into it.
In your case, just change the line to:
char path[1024]="D:\\ABCD";
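Alternatively, a sketch that avoids fixed-size char buffers altogether (it assumes PvString can be constructed from a const char*, as in the original code, and adds the backslash separator the original comment intended):

// Sketch: build the path with std::string instead of strcat into a fixed buffer.
std::string fullPath = std::string("D:\\ABCD\\") + str_time;
PvString lFilename(fullPath.c_str());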
OK, this is a bit odd. Long story short: I am fetching raw BGR images from a camera, compressing them to JPG with OpenCV, and sending them over UDP to a PC. This is how I compress the images:
// memblock contains raw image
IplImage* fIplImageHeader;
fIplImageHeader = cvCreateImageHeader(cvSize(160, 120), 8, 3);
fIplImageHeader->imageData = (char*) memblock;
// compress to JPG
vector<int> p;
p.push_back(CV_IMWRITE_JPEG_QUALITY);
p.push_back(75);
vector<unsigned char> buf;
cv::imencode(".jpg", fIplImageHeader, buf, p);
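For what it's worth, the same compression can be written without the deprecated IplImage header; a sketch, assuming memblock points to a 160x120, 8-bit, 3-channel BGR frame:

// Sketch: wrap the raw BGR bytes in a cv::Mat directly (no copy) and encode to JPEG.
cv::Mat frame(120, 160, CV_8UC3, memblock);  // rows = 120, cols = 160
std::vector<uchar> jpegBuf;
std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 75 };
cv::imencode(".jpg", frame, jpegBuf, params);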
This is how I send them with UDP:
n_sent = sendto(sk,&*buf.begin(),(int)size,0,(struct sockaddr*) &server,sizeof(server));
This is how I receive them in a PC:
int iRcvdBytes=recvfrom(iSockFd,buff,bufferSize,0,
(struct sockaddr*)&cliAddr,(socklen_t*)&cliAddrLen);
// print how many bytes we have received
cout<<"Received "<<iRcvdBytes<<" bytes from the client"<<endl;
I am getting this output:
Received 57600 bytes from the client
Received 57600 bytes from the client
...
If I remove the JPG compression in the program that fetches images from the camera, the output is the same:
Received 57600 bytes from the client
Received 57600 bytes from the client
...
However, when I save the received image to disk, its size is around 7.8 KB, while the uncompressed raw image saved to disk takes about 57 KB.
What's going on here?
The "size" you pass to send is the size of the compressed buffer, right? It's not obvious from your code snippets where "size" comes from (as ypnos suggests, I would have expected buf.size() ).
You don't use buf.size() when sending the packet. So you send more data than is actually contained in buf. In some cases you will get a segfault for that.
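In other words, the fix on the sending side is to pass the encoded length rather than the raw-frame size; a sketch, assuming buf is the std::vector<unsigned char> filled by imencode:

// Sketch: send exactly the bytes produced by imencode.
n_sent = sendto(sk, buf.data(), (int)buf.size(), 0,
                (struct sockaddr*)&server, sizeof(server));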