OpenCV IP camera application crashes [h264 @ 0xxxxx] missing picture in access unit - c++

I have an OpenCV application in C++.
It captures a video stream and saves it to video files using simple OpenCV constructs.
It works perfectly with my webcam, but it crashes after about ten seconds when I run it against the stream from an IP camera.
My compile command is:
g++ -O3 IP_Camera_linux.cpp -o IP_Camera `pkg-config --cflags --libs opencv`
My Stream from IP cam is accessed like this:
const string Stream = "rtsp://admin:xxxx@192.168.0.101/";
It runs fine at first, showing and saving video, until the displayed video freezes and the application crashes. The error message on the terminal is:
[h264 @ 0x15e6f60] error while decoding MB 59 31, bytestream (-20)
[h264 @ 0x15e8200] error while decoding MB 61 27, bytestream (-3)
[h264 @ 0x109c880] missing picture in access unit
[h264 @ 0x109c000] no frame!
As I understand it, the first two lines of the error message may be related, but they do not actually crash the application. The last two lines are probably the cause?
Any help?

Got the solution after a lot of trial and error. I just changed the stream address a bit and it worked.
From:
const string Stream = "rtsp://admin:xxxx@192.168.0.101/";
To:
const string Stream = "rtsp://admin:xxxx@192.168.0.101/ch1-s1?tcp";
I have no idea what difference it made, but it works perfectly!
Even the persistent warnings of the form:
[h264 @ 0x15e6f60] error while decoding MB 59 31, bytestream (-20)
[h264 @ 0x15e8200] error while decoding MB 61 27, bytestream (-3)
are gone.
Anyway, I would appreciate it if someone could explain the logical reason behind it.

This is an error from FFmpeg. Your FFmpeg is probably an old version and you may want to update it. Reinstalling the latest OpenCV and FFmpeg as follows solved the problem perfectly in my case:
Install latest ffmpeg
git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --enable-shared --disable-static
make
sudo make install
Install the latest opencv
git clone git@github.com:opencv/opencv.git
cd opencv
mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
make
sudo make install

To cite the original answer: appending ?tcp to the end forces the RTSP connection to run over TCP instead of UDP. This is useful if you do not actively check for connection problems and therefore cannot afford any packet loss.
For robustness, you can check for a NULL image in your loop; if you get a NULL image, reset the camera connection:
IplImage *img = cvQueryFrame(camera);
if (img == NULL) {
    printf("img == null\n");
    fflush(stdout);
    // Release the dead capture before re-opening, to avoid leaking it.
    cvReleaseCapture(&camera);
    camera = cvCreateFileCapture("rtsp://admin:xxxx@192.168.0.101/ch1-s1?tcp");
}

Related

libvpx "Codec does not implement requested capability" (decoder)

I'm currently facing an issue on a project using libvpx v1.10.0 ( https://github.com/webmproject/libvpx/releases ).
I have successfully built the library for Visual Studio 16 on Windows 10 (PC x64). (I must build libvpx myself, since I also need it to run on Windows 10 ARM64 / VS16 (HoloLens 2), and such a build is not officially provided.)
I've made a C++ DLL that uses the static libs from libvpx (to be used as a native plugin in Unity).
While the VP9 encoding part seems to work correctly in a sample app using my DLL, I cannot initialize the VP9 decoder. Maybe I am missing something in the configuration step of libvpx?
To build the libvpx static libraries, I have launched MSYS2 from the x64 Native Tools Command Prompt of Visual Studio 2019.
Then, I have set the configuration as follows, inspired by what we can find in an ArchLinux AUR package ( https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libvpx-git ):
./configure --target=x86_64-win64-vs16 --enable-libyuv \
--enable-vp8 --enable-vp9 --enable-postproc --enable-vp9-postproc \
--enable-vp9-highbitdepth --enable-vp9-temporal-denoising
make -j
At the end of the compilation, the build succeeds with no errors but 2 warnings. The --help of the configure script indicates that the --enable-vp9 option enables both the VP9 encoder and decoder.
Then, when I run my app using the C++ DLL that performs the encoding and decoding stuff, I get this error message from libvpx:
Codec does not implement requested capability .
It occurs when I call the vpx_codec_dec_init() function. I don't understand why it cannot be initialized, as I think the VP9 codec is fully built. The error also appears when I add the --enable-vp9-encoder and --enable-vp9-decoder options and all other VP9-related options to the configuration.
Is there something to do in the code itself before initializing the VP9 decoder? I have not seen such a thing in the code samples. Note that the problem also occurs if I use VP8 (encoding OK / decoding KO, same error).
Here is the beginning of my function for decoding a frame:
vpx_codec_err_t resultError;
vpx_codec_ctx_t codec;
const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_cx(); // >>> OK!
if (!decoderInterface)
{
    return "libvpx: unsupported codec (decoder)";
}
resultError = vpx_codec_dec_init(&codec, decoderInterface, nullptr, 0); // >>> KO...
if (resultError)
{
    std::cout << vpx_codec_error(&codec) << std::endl; // outputs "Codec does not implement requested capability"
    return "libvpx: failed to initialize decoder";
}
vpx_codec_iter_t iter = nullptr;
vpx_image_t* yuvFrame = nullptr;
resultError = vpx_codec_decode(&codec, compressedFrame, (unsigned int)compressedFrameSize, nullptr, 0);
if (resultError)
{
    return "libvpx: failed to decode frame";
}
// ....
Any help would be great! Thank you. :)
OK, I've figured it out! :)
The line:
const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_cx();
must be replaced by (+ #include <vpx/vp8dx.h>):
const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_dx();
The reason I made this mistake is a previous experience with video encoding/decoding. I had developed a webcam streaming app using the H.264 codec, which needs a "context" structure to be set up. So, because of the name of the vpx_codec_vp9_cx() function, I thought it was creating such a context for VP9. In fact, cx stands for encoding and dx for decoding... not really obvious, though. I don't like this kind of function naming.
Anyway, I hope it will help anybody in the same situation. ;)

Realtime desktop capturing Mac OS X Mojave and X11

I'm working on a project which streams the desktop image from a Mac OS X computer to an iOS device in real time. My main problem is the screen capture. I'm not allowed to use ready-made libraries that let you write a few lines of code in 5 minutes and stream video across the world.
I've found a really good project on GitHub which grabs an image of the whole screen using X11 and C++:
https://github.com/Butataki/cpp-x11-make-screenshot
I've tested this code on my Ubuntu and everything works like a charm : it takes about 12ms just to capture 1 frame without saving data, and about 25ms with encoding to .jpg and saving on the disk.
To be able to build it, I did this:
$ sudo apt install libjpeg-dev libpng-dev libx11-dev
changed true to TRUE in these lines:
//(screenshot.cpp : 232,233 lines)
jpeg_set_quality (&cinfo, quality, TRUE);
jpeg_start_compress(&cinfo, TRUE);
and changed Z_BEST_COMPRESSION to PNG_Z_DEFAULT_COMPRESSION
The problem is that I did almost the same operations in Xcode (Mac OS Mojave 10.14): downloaded and linked all the necessary libraries, ran the executable and finally... I got a blank image. No errors occurred; everything works 'fine' and saves a .jpg image in my folder on the desktop.
Then I figured out that X11 has something called the 'root window', which covers the whole desktop, and you can just find this window and capture everything on your screen. But I think that's true for Ubuntu, not for my Mac.
Actually, there is something about the 'root window' in this article, but I just can't fix anything:
https://finkers.wordpress.com/running-x11/#intro.rootless
P.S. If this is not a good way, maybe there are other ways to accomplish my task (real-time screen capturing on Mac OS)?

mplayer doesn't like popen suddenly?

Been stumped over a problem with popen for a few days.
The code:
#include <cstdio>
#include <iostream>

int main() {
    FILE *fp = popen("mplayer /home/linaro/Music/cp.mp3", "r");
    char buffer[1028];
    while (fgets(buffer, sizeof(buffer), fp) != NULL)
    {
        std::cerr << buffer;
    }
    pclose(fp);
    fp = 0;
    return 0;
}
was run at the shop on a Linaro embedded controller. It ran fine, no errors. Even though the code is simple (it just plays a Coldplay song on execution and then quits), it seems to be causing me more grief than one would think.
If I copy and paste the command itself into a normal bash shell, it runs fine. What's worse is that it ran fine up until it got plugged in somewhere else. I'm not sure if it is now connected via a headphone jack, whereas before it was in a line-out jack.
Regardless, the error I get when I run it is (summarized down to the relevant part):
MPlayer svn r34540 (Ubuntu), built with gcc-4.6 (C) 2000-2012 MPlayer Team
mplayer: could not connect to socket
mplayer: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.
Playing /home/linaro/Music/cp.mp3
libavformat version 53.21.1 (external) Mismatching header version 53.19.0
Audio only file format detected.
Clip info:
 Title: The Scientist
 Artist: Coldplay
 Album: A Rush Of Blood To The Head
 Year: 2002
 Comment:
 Genre: Unknown
Load subtitles in /home/linaro/Music/
Requested audio codec family [mpg123] (afm=mpg123) not available. Enable it at compilation.
Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
libavcodec version 53.35.0 (external) Mismatching header version 53.32.2
AUDIO: 44100 Hz, 2 ch, floatle, 256.0 kbit/9.07% (ratio: 32002->352800)
Selected audio codec: [ffmp3float] afm: ffmpeg (FFmpeg MPEG layer-3 audio)
Home directory not accessible: Permission denied
AO: [pulse] Init failed: Connection refused
Failed to initialize audio driver 'pulse'
Home directory not accessible: Permission denied
[AO_ALSA] alsa-lib: pcm_hw.c:1293:(snd_pcm_hw_open) open '/dev/snd/pcmC1D0p' failed (-22): Invalid argument
[AO_ALSA] Playback open error: Invalid argument
Failed to initialize audio driver 'alsa'
[AO SDL] Samplerate: 44100Hz Channels: Stereo Format floatle
[AO SDL] using aalib audio driver.
[AO SDL] Unsupported audio format: 0x1d.
[AO SDL] Unable to open audio: No available audio device
Failed to initialize audio driver 'sdl:aalib'
Could not open/initialize audio device -> no sound.
Audio: no sound
Video: no video
Again, if I copy and paste the exact command it executes via popen into the console, it begins playing. It also fails if I pass -ao alsa, -ao pulse, or -ao oss, which has me completely stumped. Any help would be appreciated!
Edit:
Linux is linaro, based on Ubuntu 12.04 using arm CPU
The issue wound up being, as Jonas and alk suggested, permissions. Something along the way changed how it ran, but at the end of the day the dirty fix was to just wrap the command as
su - user -c ' mplayer ... '
in the line being called.

Reading video files with OpenCV VideoCapture

I am having trouble opening any video files in OpenCV besides those encoded as MJPEG.
I have installed OpenCV using this script (which should compile OpenCV with support for ffmpeg) and am testing using the sample provided here.
When running with a h264 encoded video I get:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x123ed80] multiple edit list entries, a/v desync might occur, patch welcome
[h264 @ 0x12465e0] A non-intra slice in an IDR NAL unit.
[h264 @ 0x12465e0] decode_slice_header error
Could not open the output video for write: test.mp4
When running with an MPEG-2 encoded video I get:
[mpegts @ 0x1e92d80] PES packet size mismatch
[mpegts @ 0x1e92d80] PES packet size mismatch
[mpegts @ 0x1e92d80] max_analyze_duration reached
[mpegts @ 0x1e92d80] PES packet size mismatch
Could not open the output video for write: test.mpeg
I am running x64 Ubuntu 12.04.
EDIT: I tried OpenCV 2.4.8 on a Ubuntu 13.10 x86 VM, ffmpeg works fine, however the sample code still fails, this time with the following error:
[h264 @ 0x849ff40] A non-intra slice in an IDR NAL unit.
[h264 @ 0x849ff40] decode_slice_header error
Could not find encoder for codec id 28: Encoder not foundOpenCV Error: Unsupported format or combination of formats (Gstreamer Opencv backend doesn't support this codec acutally.) in CvVideoWriter_GStreamer::open, file /home/dan/Install-OpenCV/Ubuntu/2.4/OpenCV/opencv-2.4.8/modules/highgui/src/cap_gstreamer.cpp, line 505
terminate called after throwing an instance of 'cv::Exception'
what(): /home/dan/Install-OpenCV/Ubuntu/2.4/OpenCV/opencv-2.4.8/modules/highgui/src/cap_gstreamer.cpp:505: error: (-210) Gstreamer Opencv backend doesn't support this codec acutally. in function CvVideoWriter_GStreamer::open
I am not sure about the main reason for this, but I guess the problem is related to the decoders installed on your system. According to the install script from GitHub, it removes your ffmpeg and x264 and then rebuilds them from source. Before testing your OpenCV code, try a simple ffmpeg command on your test video.
Such as: ffmpeg -i inputfile.avi -f image2 image-%3d.jpeg
The script you used is quite old and installs an old version of OpenCV (2.4.2, while the latest stable is 2.4.8). Try this script instead - https://github.com/jayrambhia/Install-OpenCV/blob/master/Ubuntu/2.4/opencv2_4_8.sh - or install OpenCV and ffmpeg on your own.
As rookiepig mentioned - check whether ffmpeg is working.
Try using a different codec - http://www.fourcc.org/ has the full list of options. Of course, testing all of them is pointless - just try the most popular codecs.
I know it sounds odd, but on Windows some codecs work only in release mode (okay, they probably work in both modes, but on my machine they used to work only in release mode). Try compiling your program in both modes and check whether there is any difference.
And show us your code - maybe there is something wrong in it.

Issue with running open CV face detection on Mac mountain lion

I ran into an issue getting the standard OpenCV face detection sample (facedetect) working. The webcam light comes on but nothing happens; the program launches with a tiny window.
I am working from an excellent blog post and sample code. Here is what I have done:
Install OpenCV & get OpenCV source
brew tap homebrew/science
brew install --with-tbb opencv
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.6/opencv-2.4.6.tar.gz
tar xvzf opencv-2.4.6.tar.gz
Run the facedetect sample with the standard classifier.
cd ~/opencv-2.4.6/samples/c
chmod +x build_all.sh
./build_all.sh
./facedetect --cascade="../../data/haarcascades/haarcascade_frontalface_alt.xml"
I can modify the C++ sample code and recompile and run, but I have no idea what the issue is.
Does anyone have a suggestion?
Update: the issue is that the image from cvQueryFrame is empty:
IplImage* iplImg = cvQueryFrame( capture );
frame = iplImg;
if( frame.empty() )
{
    cout << "FRAME EMPTY\n"; // This is getting logged
    break;
}
Update: it works OK when the source is a static image, so the issue is something related to the webcam source.
You can try to localise the problem: did you try to capture an image from the webcam and show it, without running any other operation?
It seems there is a problem capturing an image from the webcam via OpenCV. This kind of problem can happen due to hardware; for instance, on my friend's MacBook Pro the captured image was 320x240 while on mine it was 640x480. My friend just changed a simple setting in the camera's configuration and his problem was solved. Your problem might be something like this.
Or you can try running the face detector on some static images: change the code so that it loads an image from disk and tries to detect faces in it. If it doesn't work that way either, the problem is not the camera and there is a bigger issue; if it works, we can be fairly sure the problem is the webcam.
EDIT
If you are using the IplImage type, be sure to grab a couple more images from the camera; sometimes the first image is empty.
This was due to a bug in OpenCV - it has been fixed (bug report here: http://code.opencv.org/issues/3156), but the version in homebrew/science is from before the fix.
You can install a newer version by editing the brew formula for opencv (based on this pull request: https://github.com/Homebrew/homebrew-science/pull/540 ).
Edit /usr/local/Library/Formula/opencv.rb and replace these lines:
url 'http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.6.1/opencv-2.4.6.1.tar.gz'
sha1 'e015bd67218844b38daf3cea8aab505b592a66c0'
with these ones
url 'https://github.com/Itseez/opencv/archive/2.4.7.1.tar.gz'
sha1 'b6b0dd72356822a482ca3a27a7a88145aca6f34c'
Then do
brew remove opencv
brew install opencv
Works on Mavericks (for me at least), should work on Mountain Lion
UPDATE: the version of OpenCV in homebrew/science has now been updated, so this answer is now out of date!
brew upgrade opencv
will make homebrew get the latest version, with fixed webcam capture.