Running error, [h264 # 0x10af4a0] AVC: nal size [big number] - c++

I am using the libavformat APIs to get video frames from an MP4 video file. My code (C++) runs fine on my personal computer, but when I deploy it to a computing server, something strange happens: errors appear in the av_read_frame() call.
[h264 # 0x10af4a0] AVC: nal size 555453589
[h264 # 0x10af4a0] AVC: nal size 555453589
[h264 # 0x10af4a0] no frame!
My code is like this:
if (av_read_frame(_p_format_ctx, &_packet) < 0) {
    return false;
}
When this error occurs the program doesn't exit, but the final results are wrong.
The OS of the computing server is Linux, kernel 2.6.32.
The FFmpeg version is 3.2.4.
The gcc version is 4.8.2.

Related

Ffmpeg video output is 0 seconds with correct filesize when uploading to google cloud bucket

I've made a C++ program that lives in GKE and takes some videos as input using ffmpeg, then does something with that input using OpenGL (not relevant), then finally encodes those edited videos as a single output. Normally the program works perfectly fine on my local machine: it encodes just as I want, with no warnings or valgrind errors whatsoever. After encoding the video, I want my program to upload it to Google Cloud Storage. This is where the problem comes in. I have tried two methods: first, using curl to upload to the cloud with a signed URL; second, mounting the storage with gcsfuse (I was already mounting the bucket to access the inputs in question). Both methods yielded undefined, weird behaviours, ranging from: outputting a 0-byte or 44-byte file; encoding the correct file size (~500 MB) but a video that is 0 seconds long (this is the most common one); outputting a 0.4-second video; or just encoding the desired output normally (really rare).
From the logs I can't see anything unusual; everything seems to work fine, and neither ffmpeg nor valgrind gives any errors or warnings. Even when I use curl to upload the video, the output is perfectly fine when first encoded (before sending it with curl), but the video gets messed up once curl uploads it to the cloud.
I'm using the muxing.c example of ffmpeg to encode my video with the only difference being:
void video_encoder::fill_yuv_image(AVFrame *frame, struct SwsContext *sws_context) {
    const int in_linesize[1] = { 4 * width };
    //uint8_t* dest[4] = { rgb_data, NULL, NULL, NULL };
    // Note: this allocates a fresh SwsContext on every call and never frees it.
    sws_context = sws_getContext(
        width, height, AV_PIX_FMT_RGBA,
        width, height, AV_PIX_FMT_YUV420P,
        SWS_BICUBIC, 0, 0, 0);
    sws_scale(sws_context, (const uint8_t * const *)&rgb_data, in_linesize, 0,
              height, frame->data, frame->linesize);
}
rgb_data is the data I got after editing the inputs. Again, this works fine and I don't think there are any errors here.
I'm not sure where the error is, and since the code is huge I can't provide a reproducible example. I'm just looking for someone to point me in the right direction.
Running the cloud's output in mplayer yields this result (this is when the video is the right size but 0 seconds long, the most common case):
MPlayer 1.4 (Debian), built with gcc-11 (C) 2000-2019 MPlayer Team
do_connect: could not connect to socket
connect: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.
Playing /media/c36c2633-d4ee-4d37-825f-88ae54b86100.
libavformat version 58.76.100 (external)
libavformat file format detected.
[mov,mp4,m4a,3gp,3g2,mj2 # 0x7f2cba1168e0]moov atom not found
LAVF_header: av_open_input_stream() failed
libavformat file format detected.
[mov,mp4,m4a,3gp,3g2,mj2 # 0x7f2cba1168e0]moov atom not found
LAVF_header: av_open_input_stream() failed
RAWDV file format detected.
VIDEO: [DVSD] 720x480 24bpp 29.970 fps 0.0 kbps ( 0.0 kbyte/s)
X11 error: BadMatch (invalid parameter attributes)
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
[vdpau] Error when calling vdp_device_create_x11: 1
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
libavcodec version 58.134.100 (external)
[dvvideo # 0x7f2cb987a380]Requested frame threading with a custom get_buffer2() implementation which is not marked as thread safe. This is not supported anymore, make your callback thread-safe.
Selected video codec: [ffdv] vfm: ffmpeg (FFmpeg DV)
==========================================================================
Load subtitles in /media/
==========================================================================
Opening audio decoder: [libdv] Raw DV Audio Decoder
Unknown/missing audio format -> no sound
ADecoder init failed :(
Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
[dvaudio # 0x7f2cb987a380]Decoder requires channel count but channels not set
Could not open codec.
ADecoder init failed :(
ADecoder init failed :(
Cannot find codec for audio format 0x56444152.
Audio: no sound
Starting playback...
[dvvideo # 0x7f2cb987a380]could not find dv frame profile
Error while decoding frame!
[dvvideo # 0x7f2cb987a380]could not find dv frame profile
Error while decoding frame!
V: 0.0 2/ 2 ??% ??% ??,?% 0 0
Exiting... (End of file)
Edit: Since the code runs on a VM, I'm using xvfb-run to start my application, but again, even with xvfb-run it works completely fine when not encoding to the cloud.
Apparently (I'm assuming for security reasons) Google Cloud Storage does not allow multiple continuous operations on a file, just a single read/write operation. So I found a workaround: encode the video to a local file inside the pod, then do a single copy operation to the cloud.

How do I properly unwrap FLV video into raw and valid h264 segments for gstreamer buffers?

I have written an RTMP server in rust that successfully allows RTMP publishers to connect, push a video stream, and RTMP clients can connect and watch those video streams successfully.
When a video RTMP packet comes in, I attempt to unwrap the video from the FLV container via:
// TODO: The FLV spec has the AVCPacketType and composition time as the first parts of the
// AVCVIDEOPACKET. It's unclear if these two fields are part of h264 or FLV specific.
let flv_tag = data.split_to(1);
let is_sequence_header;
let codec = if flv_tag[0] & 0x07 == 0x07 {
    is_sequence_header = data[0] == 0x00;
    VideoCodec::H264
} else {
    is_sequence_header = false;
    VideoCodec::Unknown
};
let is_keyframe = flv_tag[0] & 0x10 == 0x10;
After this runs, data contains the AVCVIDEOPACKET with the FLV tag removed. When I send this video to other RTMP clients I just prepend the correct FLV tag and send it off.
Now I am trying to pass the video packets to gstreamer in order to do in-process transcoding. To do this I set up an appsrc ! avdec_h264 pipeline and gave the appsrc component the following caps:
video_source.set_caps(Some(
    &Caps::builder("video/x-h264")
        .field("alignment", "nal")
        .field("stream-format", "byte-stream")
        .build(),
));
Now when an RTMP publisher sends a video packet, I take the (attempted) unwrapped video packet and pass it to my appsrc via:
pub fn push_video(&self, data: Bytes, timestamp: RtmpTimestamp) {
    let mut buffer = Buffer::with_size(data.len()).unwrap();
    {
        let buffer = buffer.get_mut().unwrap();
        buffer.set_pts(ClockTime::MSECOND * timestamp.value as u64);
        let mut samples = buffer.map_writable().unwrap();
        samples.as_mut_slice().copy_from_slice(&data);
    }
    self.video_source.push_buffer(buffer).unwrap();
}
When this occurs the following gstreamer debug output appears
[2022-02-09T18:25:15Z INFO gstreamer_mmids_scratchpad] Pushing packet #0 (is_sequence_header:true, is_keyframe=true)
[2022-02-09T18:25:15Z INFO gstreamer_mmids_scratchpad] Connection 63397d56-16fb-4b54-a622-d991b5ad2d8e sent audio data
0:00:05.531722000 7516 000001C0C04011C0 INFO GST_EVENT gstevent.c:973:gst_event_new_segment: creating segment event bytes segment start=0, offset=0, stop=-1, rate=1.000000, applied_rate=1.000000, flags=0x00, time=0, base=0, position 0, duration -1
0:00:05.533525000 7516 000001C0C04011C0 INFO basesrc gstbasesrc.c:3018:gst_base_src_loop:<video_source> marking pending DISCONT
0:00:05.535385000 7516 000001C0C04011C0 WARN videodecoder gstvideodecoder.c:2818:gst_video_decoder_chain:<video_decode> Received buffer without a new-segment. Assuming timestamps start from 0.
0:00:05.537381000 7516 000001C0C04011C0 INFO GST_EVENT gstevent.c:973:gst_event_new_segment: creating segment event time segment start=0:00:00.000000000, offset=0:00:00.000000000, stop=99:99:99.999999999, rate=1.000000, applied_rate=1.000000, flags=0x00, time=0:00:00.000000000, base=0:00:00.000000000, position 0:00:00.000000000, duration 99:99:99.999999999
[2022-02-09T18:25:15Z INFO gstreamer_mmids_scratchpad] Pushing packet #1 (is_sequence_header:false, is_keyframe=true)
0:00:05.563445000 7516 000001C0C04011C0 INFO libav :0:: Invalid NAL unit 0, skipping.
[2022-02-09T18:25:15Z INFO gstreamer_mmids_scratchpad] Pushing packet #2 (is_sequence_header:false, is_keyframe=false)
0:00:05.579274000 7516 000001C0C04011C0 ERROR libav :0:: No start code is found.
0:00:05.581338000 7516 000001C0C04011C0 ERROR libav :0:: Error splitting the input into NAL units.
0:00:05.583337000 7516 000001C0C04011C0 WARN libav gstavviddec.c:2068:gst_ffmpegviddec_handle_frame:<video_decode> Failed to send data for decoding
[2022-02-09T18:25:15Z INFO gstreamer_mmids_scratchpad] Pushing packet #3 (is_sequence_header:false, is_keyframe=false)
0:00:05.595253000 7516 000001C0C04011C0 ERROR libav :0:: No start code is found.
0:00:05.597204000 7516 000001C0C04011C0 ERROR libav :0:: Error splitting the input into NAL units.
0:00:05.599262000 7516 000001C0C04011C0 WARN libav gstavviddec.c:2068:gst_ffmpegviddec_handle_frame:<video_decode> Failed to send data for decoding
Based on this I figured this might be caused by the non-data portions of the AVCVIDEOPACKET not being part of the h264 flow, but an FLV specific flow. So I tried ignoring the first 4 bytes (AVCPacketType and CompositionTime fields) of each packet I wrote to the buffer:
pub fn push_video(&self, data: Bytes, timestamp: RtmpTimestamp) {
    let mut buffer = Buffer::with_size(data.len() - 4).unwrap();
    {
        let buffer = buffer.get_mut().unwrap();
        buffer.set_pts(ClockTime::MSECOND * timestamp.value as u64);
        let mut samples = buffer.map_writable().unwrap();
        // Skip the 4-byte AVCPacketType + CompositionTime header.
        samples.as_mut_slice().copy_from_slice(&data[4..]);
    }
    self.video_source.push_buffer(buffer).unwrap();
}
This essentially gave me the same logging output and errors. This is reproducible with the h264parse plugin as well.
What am I missing in the unwrapping process to pass raw h264 video to gstreamer?
Edit:
Realizing I misread the pad template, I tried the following caps instead:
video_source.set_caps(Some(
    &Caps::builder("video/x-h264")
        .field("alignment", "au")
        .field("stream-format", "avc")
        .build(),
));
This also failed with pretty similar output.
I think I finally figured this out.
The first thing is that I need to remove the AVCVIDEOPACKET headers (the packet type and composition time fields). These are not part of the h264 format and thus cause parsing errors.
The second thing I needed to do was to not pass the sequence header as a buffer to the source. Instead, the sequence header bytes need to be set as the codec_data field on the appsrc's caps. With that there are no parsing errors when passing the video data to h264parse, and it even gives me a correctly sized window.
The third thing I was missing was the correct dts and pts values. It turns out the RTMP timestamp I'm given is the dts, and pts = AVCVIDEOPACKET.CompositionTime + dts.

opencart image uploading error in admin panel

I am working in opencart 2.3.0.2.
Whenever I upload a product image, I get this error every time:
Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 22464 bytes) in /XXXX/XXXX/public_html/system/library/image.php on line 26
My image size is only a few KB but I still get the error. I also tried, in index.php:
ini_set("memory_limit",2048);
and in the php.ini file:
memory_limit = 128M;
but I still haven't found a solution. Any help appreciated.

ffmpeg based multi threaded c++ application fails on decoding

I am using the remuxing example from the ffmpeg sources as a reference. I wrote a multi-threaded application based on boost threads to perform a codec copy and remux using the ffmpeg API. That works fine. The problem arises when I try to decode the frame:
"
ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error decoding video %s\n",av_make_error_string(errorBuff,80,ret));
return -1;
}"
I need the decoded frame to convert it to an OpenCV Mat object. For a single instance this code works fine, but as soon as I run multiple threads I start getting decoding errors like these:
left block unavailable for requested intra mode at 0 0
[h264 # 0x7f9a48115100] error while decoding MB 0 0, bytestream 1479
[h264 # 0x7f9a480825e0] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
[h264 # 0x7f9a480ae680] error while decoding MB 13 5, bytestream -20
[h264 # 0x7f9a48007700] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
[h264 # 0x7f9a48110340] top block unavailable for requested intra4x4 mode -1 at 31 0
[h264 # 0x7f9a48110340] error while decoding MB 31 0, bytestream 1226
[h264 # 0x7f9a48115100] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
[h264 # 0x7f9a480825e0] top block unavailable for requested intra4x4 mode -1 at 4 0
[h264 # 0x7f9a480825e0] error while decoding MB 4 0, bytestream 1292
[h264 # 0x7f9a480ae680] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
All variables used by the ffmpeg API are declared local to the thread function. I am not sure how ffmpeg's frame allocs or context allocs work.
Any help in making the decoding process multi-threaded?
Update:
I have included ff_lockmgr:
static int ff_lockmgr(void **mutex, enum AVLockOp op)
{
    pthread_mutex_t **pmutex = (pthread_mutex_t **) mutex;
    switch (op) {
    case AV_LOCK_CREATE:
        *pmutex = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t));
        pthread_mutex_init(*pmutex, NULL);
        break;
    case AV_LOCK_OBTAIN:
        pthread_mutex_lock(*pmutex);
        break;
    case AV_LOCK_RELEASE:
        pthread_mutex_unlock(*pmutex);
        break;
    case AV_LOCK_DESTROY:
        pthread_mutex_destroy(*pmutex);
        free(*pmutex);
        break;
    }
    return 0;
}
and registered it with av_lockmgr_register(ff_lockmgr);
Now the video is being decoded in all threads, BUT the images saved from the decoded frames (via FFmpeg AVFrame to OpenCV Mat conversion and imwrite) come out garbled (mixed): part of a frame is from one camera and the rest is from another, or the image doesn't make any sense at all.
Not every format decoder supports multiple threads, and even for the decoders which support it, it might not be supported for a particular file.
For example, consider a MPEG4 file with a single keyframe at the beginning, followed by P frames. In this case every next frame depends on previous, and using multiple threads would not likely produce any benefits.
In my app I had to disable multithreaded encoders because of that.

X264 encoding using Opencv

I am working with a high resolution camera (4008x2672). I am writing a simple program which grabs frames from the camera and writes them to an AVI file. For such a high resolution, I found that only the x264 codec could do the trick (suggestions welcome). I am using OpenCV for most of the image handling. As mentioned in this post http://doom10.org/index.php?topic=1019.0 , I modified the AVCodecContext members as per the ffmpeg presets for libx264 (I had to do this to avoid the broken-ffmpeg-default-settings error). This is the output I get when I try to run the program:
[libx264 # 0x992d040]non-strictly-monotonic PTS
1294846981.526675 1 0 //Timestamp camera_no frame_no
1294846981.621101 1 1
1294846981.715521 1 2
1294846981.809939 1 3
1294846981.904360 1 4
1294846981.998782 1 5
1294846982.093203 1 6
Last message repeated 7 times
[avi # 0x992beb0]st:0 error, non monotone timestamps
-614891469123651720 >= -614891469123651720
OpenCV Error: Unspecified error (Error while writing video frame) in
icv_av_write_frame_FFMPEG, file
/home/ajoshi/ext/OpenCV-2.2.0/modules/highgui/src/cap_ffmpeg.cpp, line 1034
terminate called after throwing an instance of 'cv::Exception'
what(): /home/ajoshi/ext/OpenCV-2.2.0/modules/highgui/src/cap_ffmpeg.cpp:1034:
error: (-2) Error while writing video frame in function icv_av_write_frame_FFMPEG
Aborted
Modifications to the AVCodecContext are:
if (codec_id == CODEC_ID_H264)
{
    //fprintf(stderr, "Trying to parse a preset file for libx264\n");
    //Setting values manually from the medium preset
    c->me_method = 7;
    c->qcompress = 0.6;
    c->qmin = 10;
    c->qmax = 51;
    c->max_qdiff = 4;
    c->i_quant_factor = 0.71;
    c->max_b_frames = 3;
    c->b_frame_strategy = 1;
    c->me_range = 16;
    c->me_subpel_quality = 7;
    c->coder_type = 1;
    c->scenechange_threshold = 40;
    c->partitions = X264_PART_I8X8 | X264_PART_I4X4 | X264_PART_P8X8 | X264_PART_B8X8;
    c->flags = CODEC_FLAG_LOOP_FILTER;
    c->flags2 = CODEC_FLAG2_BPYRAMID | CODEC_FLAG2_MIXED_REFS | CODEC_FLAG2_WPRED | CODEC_FLAG2_8X8DCT | CODEC_FLAG2_FASTPSKIP;
    c->keyint_min = 25;
    c->refs = 3;
    c->trellis = 1;
    c->directpred = 1;
    c->weighted_p_pred = 2;
}
I am probably not setting the dts and pts values, which I believed ffmpeg would set for me.
Any suggestions welcome.
Thanks in advance.
I would probably run the x264 executable in another process and pipe either RGB or YUV pixels to it. Then you can use all the normal x264 (or ffmpeg) flags, and it handles multi-threading for you.
And since x264 is GPL licensed, it also gives you more freedom in licensing your app.
PS. Here is some sample code using ffmpeg from Qt; you can ignore the Qt-specific bits, but it gives a good starting point for using ffmpeg from a C++ app.
The actual error is "non monotone timestamps". It seems that you didn't properly initialize the video frame properties. If possible, use libx264 directly; it'll be easier to handle.
PS. You can work around the ffmpeg x264 settings problem by specifying an x264 preset file with the -fvpre option.
The pts value of the AVFrame you send as the last argument to avcodec_encode_video needs to be set by you. Once you set this, the codec context's coded_frame->pts field will have the correct value, which you can av_rescale_q() and set in the AVPacket for your av_interleaved_write_frame().